Feb 17 15:53:55 crc systemd[1]: Starting Kubernetes Kubelet... Feb 17 15:53:55 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 15:53:55 
crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 
15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc 
restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:53:55 
crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:53:55 
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:53:55 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:55 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:53:56 crc restorecon[4694]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 
15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:53:56 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 17 15:53:56 crc kubenswrapper[4808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:53:56 crc kubenswrapper[4808]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 17 15:53:56 crc kubenswrapper[4808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:53:56 crc kubenswrapper[4808]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:53:56 crc kubenswrapper[4808]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 17 15:53:56 crc kubenswrapper[4808]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.912432 4808 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923339 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923401 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923410 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923420 4808 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923428 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923437 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923445 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923452 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923461 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923471 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923480 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923489 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923499 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923507 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923515 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923523 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923532 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923540 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923548 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923557 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923565 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923598 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923606 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923614 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923623 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923631 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923639 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923646 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923655 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923665 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923675 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923684 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923691 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923714 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923723 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923731 4808 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923739 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923747 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923755 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923763 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923771 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923778 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923786 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923801 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923810 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923818 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923826 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923834 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923841 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923850 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923858 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923868 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923879 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923890 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923901 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923910 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923919 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923927 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923936 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923944 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923955 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923964 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923973 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923980 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923988 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.923996 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.924006 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.924016 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.924026 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.924035 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.924043 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925110 4808 flags.go:64] FLAG: --address="0.0.0.0" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925134 4808 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925150 4808 flags.go:64] FLAG: --anonymous-auth="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925173 4808 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925187 4808 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925197 4808 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925209 4808 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925220 4808 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925230 4808 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925239 4808 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925249 4808 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925259 4808 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925268 4808 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925278 4808 flags.go:64] FLAG: --cgroup-root="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925287 4808 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925296 4808 flags.go:64] FLAG: --client-ca-file="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925305 4808 flags.go:64] FLAG: --cloud-config="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925315 4808 flags.go:64] FLAG: --cloud-provider="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925324 4808 flags.go:64] FLAG: --cluster-dns="[]" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925338 4808 flags.go:64] FLAG: --cluster-domain="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925348 4808 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925359 4808 flags.go:64] FLAG: --config-dir="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925368 4808 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925379 4808 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925391 4808 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 15:53:56 crc 
kubenswrapper[4808]: I0217 15:53:56.925400 4808 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925410 4808 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925420 4808 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925429 4808 flags.go:64] FLAG: --contention-profiling="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925438 4808 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925448 4808 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925458 4808 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925469 4808 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925488 4808 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925498 4808 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925507 4808 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925516 4808 flags.go:64] FLAG: --enable-load-reader="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925527 4808 flags.go:64] FLAG: --enable-server="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925536 4808 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925547 4808 flags.go:64] FLAG: --event-burst="100" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925556 4808 flags.go:64] FLAG: --event-qps="50" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925566 4808 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925604 4808 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925614 4808 flags.go:64] FLAG: --eviction-hard="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925625 4808 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925634 4808 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925644 4808 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925654 4808 flags.go:64] FLAG: --eviction-soft="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925664 4808 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925673 4808 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925682 4808 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925691 4808 flags.go:64] FLAG: --experimental-mounter-path="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925702 4808 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925711 4808 flags.go:64] FLAG: --fail-swap-on="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925721 4808 flags.go:64] FLAG: --feature-gates="" Feb 17 
15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925732 4808 flags.go:64] FLAG: --file-check-frequency="20s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925741 4808 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925752 4808 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925762 4808 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925771 4808 flags.go:64] FLAG: --healthz-port="10248" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925780 4808 flags.go:64] FLAG: --help="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925790 4808 flags.go:64] FLAG: --hostname-override="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925799 4808 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925809 4808 flags.go:64] FLAG: --http-check-frequency="20s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925819 4808 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925829 4808 flags.go:64] FLAG: --image-credential-provider-config="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925838 4808 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925847 4808 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925857 4808 flags.go:64] FLAG: --image-service-endpoint="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925866 4808 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925875 4808 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925885 4808 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925894 4808 flags.go:64] FLAG: --kube-api-qps="50" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925904 4808 flags.go:64] FLAG: --kube-reserved="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925914 4808 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925923 4808 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925932 4808 flags.go:64] FLAG: --kubelet-cgroups="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925941 4808 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925951 4808 flags.go:64] FLAG: --lock-file="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925960 4808 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925969 4808 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925978 4808 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.925992 4808 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926002 4808 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926012 4808 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 15:53:56 crc 
kubenswrapper[4808]: I0217 15:53:56.926022 4808 flags.go:64] FLAG: --logging-format="text" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926031 4808 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926041 4808 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926050 4808 flags.go:64] FLAG: --manifest-url="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926060 4808 flags.go:64] FLAG: --manifest-url-header="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926071 4808 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926081 4808 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926092 4808 flags.go:64] FLAG: --max-pods="110" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926102 4808 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926112 4808 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926121 4808 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926130 4808 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926140 4808 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926149 4808 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926159 4808 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926179 4808 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926188 4808 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926198 4808 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926208 4808 flags.go:64] FLAG: --pod-cidr="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926217 4808 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926230 4808 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926239 4808 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926259 4808 flags.go:64] FLAG: --pods-per-core="0" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926268 4808 flags.go:64] FLAG: --port="10250" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926277 4808 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926286 4808 flags.go:64] FLAG: --provider-id="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926295 4808 flags.go:64] FLAG: --qos-reserved="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926304 4808 flags.go:64] FLAG: --read-only-port="10255" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926314 4808 flags.go:64] FLAG: --register-node="true" Feb 17 15:53:56 crc 
kubenswrapper[4808]: I0217 15:53:56.926323 4808 flags.go:64] FLAG: --register-schedulable="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926333 4808 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926347 4808 flags.go:64] FLAG: --registry-burst="10" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926356 4808 flags.go:64] FLAG: --registry-qps="5" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926366 4808 flags.go:64] FLAG: --reserved-cpus="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926378 4808 flags.go:64] FLAG: --reserved-memory="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926389 4808 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926399 4808 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926408 4808 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926417 4808 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926427 4808 flags.go:64] FLAG: --runonce="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926438 4808 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926448 4808 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926458 4808 flags.go:64] FLAG: --seccomp-default="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926467 4808 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926476 4808 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926487 4808 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926496 4808 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926505 4808 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926514 4808 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926523 4808 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926533 4808 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926542 4808 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926551 4808 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926561 4808 flags.go:64] FLAG: --system-cgroups="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926598 4808 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926613 4808 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926623 4808 flags.go:64] FLAG: --tls-cert-file="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926632 4808 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926644 4808 flags.go:64] FLAG: --tls-min-version="" Feb 17 15:53:56 
crc kubenswrapper[4808]: I0217 15:53:56.926653 4808 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926661 4808 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926671 4808 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926705 4808 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926715 4808 flags.go:64] FLAG: --v="2" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926736 4808 flags.go:64] FLAG: --version="false" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926749 4808 flags.go:64] FLAG: --vmodule="" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926760 4808 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.926770 4808 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.926985 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.926996 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927005 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927016 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927026 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927035 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927043 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927051 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927059 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927067 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927075 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927083 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927091 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927099 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927107 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927115 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927123 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927130 4808 feature_gate.go:330] unrecognized feature gate: 
MinimumKubeletVersion Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927141 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927150 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927157 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927165 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927173 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927181 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927189 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927196 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927204 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927212 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927223 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927231 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927238 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927246 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927254 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927261 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927270 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927278 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927287 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927294 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927307 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927316 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927325 4808 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927334 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927342 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927350 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927358 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927366 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927374 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927382 4808 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927390 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927398 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927410 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927418 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927426 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927434 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927442 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927450 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927458 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927466 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927474 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927481 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927493 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927500 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927509 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927518 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927529 4808 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927539 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927549 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927557 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927566 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927603 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.927613 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.928720 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.943530 4808 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.943631 4808 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943799 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943822 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943838 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943853 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943870 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943882 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943896 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943910 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943922 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943933 4808 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943945 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943957 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943969 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943984 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.943998 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944010 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944020 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944034 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944048 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944062 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944074 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944085 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944096 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944107 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944119 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944130 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944140 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944151 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944162 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944172 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944181 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944191 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 
15:53:56.944200 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944212 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944225 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944236 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944248 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944258 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944268 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944279 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944288 4808 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944298 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944309 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944319 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944330 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944340 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944350 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944360 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944370 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944380 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944391 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944401 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944411 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944422 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944432 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944442 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944455 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944466 4808 feature_gate.go:330] unrecognized feature gate: 
InsightsConfigAPI Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944476 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944486 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944497 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944507 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944517 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944527 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944538 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944549 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944560 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944603 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944614 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944624 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944637 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.944654 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944963 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944982 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.944994 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945006 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945017 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945029 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945041 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945052 4808 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945063 4808 feature_gate.go:330] 
unrecognized feature gate: NewOLM Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945075 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945086 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945096 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945106 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945118 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945130 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945141 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945152 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945163 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945174 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945184 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945198 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945212 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945223 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945234 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945245 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945255 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945265 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945275 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945285 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945298 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945308 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945318 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945328 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945339 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945351 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945361 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945374 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945388 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945401 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945412 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945423 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945434 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945444 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945454 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945465 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945475 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945485 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945495 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945505 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945521 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945535 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945547 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945558 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945605 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945620 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945632 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945643 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945653 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945663 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945674 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945685 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945697 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945707 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945718 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945728 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945739 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945750 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945760 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945770 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945779 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:53:56 crc kubenswrapper[4808]: W0217 15:53:56.945793 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.945814 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.946225 4808 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.950356 4808 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.950461 4808 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.951818 4808 server.go:997] "Starting client certificate rotation"
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.951843 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.952060 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-09 22:19:34.944998021 +0000 UTC
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.952228 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.975988 4808 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.977794 4808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 15:53:56 crc kubenswrapper[4808]: E0217 15:53:56.980103 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:53:56 crc kubenswrapper[4808]: I0217 15:53:56.995351 4808 log.go:25] "Validated CRI v1 runtime API"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.029124 4808 log.go:25] "Validated CRI v1 image API"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.031228 4808 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.038176 4808 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-15-49-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.038213 4808 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.053232 4808 manager.go:217] Machine: {Timestamp:2026-02-17 15:53:57.05047691 +0000 UTC m=+0.566836003 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8fe3bc97-dd01-4038-9ff9-743e71f8162b BootID:7379f6dd-5937-4d60-901f-8c9dc45481b3 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bf:d7:c2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bf:d7:c2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:51:86:26 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:89:6b:02 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c7:32:1d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b6:7c:82 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:73:3d:e7:4f:3d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:46:20:49:83:8a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.053460 4808 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.053632 4808 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.056421 4808 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.056693 4808 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.056739 4808 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.056950 4808 topology_manager.go:138] "Creating topology manager with none policy"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.056962 4808 container_manager_linux.go:303] "Creating device plugin manager"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.057795 4808 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.057831 4808 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.059003 4808 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.059117 4808 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.063192 4808 kubelet.go:418] "Attempting to sync node with API server"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.063219 4808 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.063237 4808 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.063254 4808 kubelet.go:324] "Adding apiserver pod source"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.063270 4808 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.068682 4808 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.069720 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.069795 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.069819 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.069971 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.070023 4808 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.072455 4808 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074528 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074558 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074567 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074592 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074608 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074619 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074627 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074640 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074649 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074656 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074668 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.074674 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.077110 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.077713 4808 server.go:1280] "Started kubelet"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.077893 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.078735 4808 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.078746 4808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.079681 4808 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 15:53:57 crc systemd[1]: Started Kubernetes Kubelet.
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.081185 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.081233 4808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.081349 4808 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.081367 4808 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.081559 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.081568 4808 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.081538 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:36:20.170147281 +0000 UTC
Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.089363 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.089874 4808 factory.go:55] Registering systemd factory
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.089918 4808 factory.go:221] Registration of the systemd container factory successfully
Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.089971 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.090058 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.093842 4808 factory.go:153] Registering CRI-O factory
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.094004 4808 factory.go:221] Registration of the crio container factory successfully
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.094215 4808 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.094389 4808 factory.go:103] Registering Raw factory
Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.093878 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189513a72729afaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:53:57.077667754 +0000 UTC m=+0.594026827,LastTimestamp:2026-02-17 15:53:57.077667754 +0000 UTC m=+0.594026827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.097102 4808 manager.go:1196] Started watching for new ooms in manager
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.097412 4808 server.go:460] "Adding debug handlers to kubelet server"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.098249 4808 manager.go:319] Starting recovery of all containers
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102858 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102903 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102917 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102928 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102941 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102953 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102965 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102976 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.102989 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103000 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103011 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103025 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103036 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103047 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103059 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103072 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103083 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103094 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103104 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103115 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103125 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103137 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103151 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103163 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103176 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103188 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103202 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103213 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103223 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103234 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103245 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103265 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103277 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103288 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103300 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103310 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103321 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103333 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103344 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103355 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103369 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103379 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103390 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103402 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103413 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103425 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103437 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103449 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103460 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103471 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103483 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103494 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.103508 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105439 4808 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105476 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105492 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105507 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105520 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105531 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105545 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105556 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105567 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105601 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105618 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105634 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105648 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105661 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105676 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105690 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105705 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105723 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105738 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105752 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105769 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105782 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105799 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105812 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105829 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105845 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105860 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105880 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105894 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105909 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105923 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105953 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105970 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.105987 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106001 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106014 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106028 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106041 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106055 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106071 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106087 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106101 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106116 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106128 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106149 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106163 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106179 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106193 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106206 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106220 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106235 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106248 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106267 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106280 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106294 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106309 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106323 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106337 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106354 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106370 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106384 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106398 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106411 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106426 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106442 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106457 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106470 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106486 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106500 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106516 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106530 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106544 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106558 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106613 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106633 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106647 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106659 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106673 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106686 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106698 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106742 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106757 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106773 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106786 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106799 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106810 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106825 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106838 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b"
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106851 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106865 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106881 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106893 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106905 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106921 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106934 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106946 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106958 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106969 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.106987 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107000 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107011 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107025 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107038 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107049 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107060 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107072 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107086 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107097 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107109 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107121 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107133 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107147 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107160 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107172 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107186 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107200 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107212 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107225 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107238 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107251 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107265 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107277 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107292 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107307 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107321 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107334 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107349 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107362 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107376 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107389 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107407 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107421 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107433 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107446 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107458 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107471 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107485 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107498 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107513 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107526 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107539 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107551 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107562 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107596 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107608 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107620 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107633 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107647 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107660 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107671 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107683 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107695 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107707 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107719 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107734 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107745 4808 reconstruct.go:97] "Volume reconstruction finished" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.107754 4808 reconciler.go:26] "Reconciler: start to sync state" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.120550 4808 manager.go:324] Recovery completed Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.132273 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.135372 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.135449 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.135460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.137840 4808 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.137858 4808 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.137885 4808 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.141483 4808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.144378 4808 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.144438 4808 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.144486 4808 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.144728 4808 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.146599 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.146759 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.153139 4808 policy_none.go:49] "None policy: Start" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.154388 4808 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.154428 4808 state_mem.go:35] "Initializing new in-memory state store" Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.181910 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.218792 4808 manager.go:334] "Starting Device Plugin manager" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.218869 4808 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.218887 4808 server.go:79] "Starting device plugin registration server" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.219448 4808 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.219471 4808 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.219824 4808 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.219926 4808 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.219942 4808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.232750 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.244885 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:53:57 crc kubenswrapper[4808]: 
I0217 15:53:57.244988 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.246130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.246165 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.246189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.246300 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.246923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.246944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.246953 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.247761 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.247789 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.247868 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248060 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248102 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248601 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248686 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248797 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.248820 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249275 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249292 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249806 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249927 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249928 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.250001 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.250021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.249969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.250060 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.250288 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.250399 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.250444 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251046 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251074 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251165 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251199 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251229 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251899 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.251914 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.290253 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312456 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312499 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312560 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312609 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312667 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312713 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312758 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312889 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.312969 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.313017 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.313060 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc 
kubenswrapper[4808]: I0217 15:53:57.313105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.313145 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.313189 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.319966 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.321251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.321296 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.321305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.321338 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.321888 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415095 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415157 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415226 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415270 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415291 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415313 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415337 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415412 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: 
I0217 15:53:57.415423 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415358 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415664 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415361 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415734 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415768 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415842 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415767 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415881 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415492 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.415964 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.522999 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.525894 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.525954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.525964 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.526000 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.526632 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Feb 17 15:53:57 crc 
kubenswrapper[4808]: I0217 15:53:57.571285 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.582432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.607954 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.626607 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.632940 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.635225 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8fd02601f7c292a8392d07681a35f11d7c07511b76bf5c61747b92706b1f0350 WatchSource:0}: Error finding container 8fd02601f7c292a8392d07681a35f11d7c07511b76bf5c61747b92706b1f0350: Status 404 returned error can't find the container with id 8fd02601f7c292a8392d07681a35f11d7c07511b76bf5c61747b92706b1f0350 Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.638022 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7dbb8217455e24b92156a62d20dcae3cf6f601feff263cfc4c2a756cb5a2bc00 WatchSource:0}: Error finding container 7dbb8217455e24b92156a62d20dcae3cf6f601feff263cfc4c2a756cb5a2bc00: Status 404 returned error can't find the container with id 7dbb8217455e24b92156a62d20dcae3cf6f601feff263cfc4c2a756cb5a2bc00 Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.644887 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-50f7499844ac32267920d8c00a8095103ee5481f9be878cdcfcad96cf7fc67e2 WatchSource:0}: Error finding container 50f7499844ac32267920d8c00a8095103ee5481f9be878cdcfcad96cf7fc67e2: Status 404 returned error can't find the container with id 50f7499844ac32267920d8c00a8095103ee5481f9be878cdcfcad96cf7fc67e2 Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.652503 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8108ed2fa92373be836ae97e00aaacb429a7e89cfe15397c1f8c4e728160fdcc WatchSource:0}: Error finding container 8108ed2fa92373be836ae97e00aaacb429a7e89cfe15397c1f8c4e728160fdcc: Status 404 returned error can't find the container with id 8108ed2fa92373be836ae97e00aaacb429a7e89cfe15397c1f8c4e728160fdcc Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.654516 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-23e7ae90833332d580d5f17bc1314947ff235f324f6e46c62dd1bd8881614a96 WatchSource:0}: Error finding container 23e7ae90833332d580d5f17bc1314947ff235f324f6e46c62dd1bd8881614a96: Status 404 returned error can't find the container with 
id 23e7ae90833332d580d5f17bc1314947ff235f324f6e46c62dd1bd8881614a96 Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.692291 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms" Feb 17 15:53:57 crc kubenswrapper[4808]: W0217 15:53:57.906539 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.906683 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.926899 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.928413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.928468 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.928485 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:57 crc kubenswrapper[4808]: I0217 15:53:57.928523 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:53:57 crc kubenswrapper[4808]: E0217 15:53:57.929199 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.078853 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.081858 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:28:24.585927399 +0000 UTC Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.149679 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8fd02601f7c292a8392d07681a35f11d7c07511b76bf5c61747b92706b1f0350"} Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.151877 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8108ed2fa92373be836ae97e00aaacb429a7e89cfe15397c1f8c4e728160fdcc"} Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.153027 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23e7ae90833332d580d5f17bc1314947ff235f324f6e46c62dd1bd8881614a96"} Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.154153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"50f7499844ac32267920d8c00a8095103ee5481f9be878cdcfcad96cf7fc67e2"} Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.155271 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7dbb8217455e24b92156a62d20dcae3cf6f601feff263cfc4c2a756cb5a2bc00"} Feb 17 15:53:58 crc kubenswrapper[4808]: W0217 15:53:58.386470 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:58 crc kubenswrapper[4808]: E0217 15:53:58.386913 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:53:58 crc kubenswrapper[4808]: W0217 15:53:58.409107 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:58 crc kubenswrapper[4808]: E0217 15:53:58.409207 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:53:58 crc kubenswrapper[4808]: W0217 15:53:58.466149 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:58 crc kubenswrapper[4808]: E0217 15:53:58.466250 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:53:58 crc kubenswrapper[4808]: E0217 15:53:58.493670 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s" Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.729716 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:58 crc 
kubenswrapper[4808]: I0217 15:53:58.731383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.731438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.731450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:58 crc kubenswrapper[4808]: I0217 15:53:58.732068 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:53:58 crc kubenswrapper[4808]: E0217 15:53:58.732646 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.079887 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.082528 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:57:39.15101778 +0000 UTC Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.114691 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:53:59 crc kubenswrapper[4808]: E0217 15:53:59.116942 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.162435 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42" exitCode=0 Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.162558 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.162627 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.164060 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.164105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.164121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.166646 4808 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159" 
exitCode=0 Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.166742 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.166818 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.167810 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.168441 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.168467 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.168477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.168742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.168780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.168797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.172556 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913" exitCode=0 Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.172674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.172711 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.174393 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.174427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.174438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.177491 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.177560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.177601 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.177617 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.177718 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.179272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.179279 4808 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2a55cfe852dc761ab878ee565fdddb28116fbcb015ba837ed3b9d266a33ee357" exitCode=0 Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.179320 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.179337 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.179337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2a55cfe852dc761ab878ee565fdddb28116fbcb015ba837ed3b9d266a33ee357"} Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.179421 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.180314 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.180349 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.180363 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.386334 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:59 crc kubenswrapper[4808]: I0217 15:53:59.589746 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:53:59 crc kubenswrapper[4808]: W0217 15:53:59.971667 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:53:59 crc kubenswrapper[4808]: E0217 15:53:59.971783 4808 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.079281 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.082694 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:15:21.949382747 +0000 UTC Feb 17 15:54:00 crc kubenswrapper[4808]: E0217 15:54:00.095260 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s" Feb 17 15:54:00 crc kubenswrapper[4808]: W0217 15:54:00.136359 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:54:00 crc kubenswrapper[4808]: E0217 15:54:00.136711 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.186081 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.186126 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.186139 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.186184 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.187125 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.187149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.187158 4808 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.189003 4808 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="17c2026bddc60a011cd7fae144526e4a3fdaafbb403ee2eae34b6160f49c4f8f" exitCode=0 Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.189062 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.189073 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"17c2026bddc60a011cd7fae144526e4a3fdaafbb403ee2eae34b6160f49c4f8f"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.189893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.189916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.189924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.193619 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.193665 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.193678 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.193690 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.195243 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"670ac0bd1d8baf07179e911a15b5cb9c2137b2711e56c6a0243052ad67ff8ca3"} Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.195279 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.195307 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.196810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.196817 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.196850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.196859 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.196967 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.196995 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.333655 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.336490 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.336543 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.336554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.336600 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:00 crc kubenswrapper[4808]: E0217 15:54:00.337222 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.470789 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:00 crc kubenswrapper[4808]: I0217 15:54:00.697168 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:00 crc kubenswrapper[4808]: W0217 15:54:00.950377 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:54:00 crc kubenswrapper[4808]: E0217 15:54:00.950471 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.079663 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.083631 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:04:26.157960796 +0000 UTC Feb 17 15:54:01 crc kubenswrapper[4808]: W0217 15:54:01.138422 4808 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 17 15:54:01 crc kubenswrapper[4808]: E0217 15:54:01.138524 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.202513 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df"} Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.202623 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.203618 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.203644 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.203654 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.204706 4808 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d6a91335c612e9bc3384afdcee29fce91bb775df29fa47f0e56572c2dd4ef02e" exitCode=0 Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.204764 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.204806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d6a91335c612e9bc3384afdcee29fce91bb775df29fa47f0e56572c2dd4ef02e"} Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.204845 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.204885 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.204854 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.204974 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.205894 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.205931 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.205948 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:01 
crc kubenswrapper[4808]: I0217 15:54:01.205966 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.205988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.205999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.206073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.206086 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.206101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.206118 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.206105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.206213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:01 crc kubenswrapper[4808]: I0217 15:54:01.649911 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.084080 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:03:23.593171087 +0000 UTC Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211276 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b19cd43f8e20356b7daaffdbcc3e29b36f9b51facdb6b7b3b95280b88d56a7a4"} Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211306 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211325 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3782183041eeb247c7ed98938c06d0a8c128573a84f17b79b96df4519f423e6"} Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c55d4a91ed26f4ad69169b9ea319486429c33bb7731ecce7c850d26d8bf9dc00"} Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211348 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c0d257ce17ccf9a4c42f137f1696f60fbb4505501c98e2699fdd4fcbc97c583"} Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211353 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211401 4808 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.211416 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.215311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.215350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.215359 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.215427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.215465 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.215474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.216408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.216430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:02 crc kubenswrapper[4808]: I0217 15:54:02.216440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.084680 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:42:37.61312473 +0000 UTC Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.218552 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.218625 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.219951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e244dbf3f628b1754b402213f096cc7ab037f537e8651e3ab21d0efefca19106"} Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.220766 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.220903 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.220969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.221004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.225247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.225319 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.225341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.467609 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.538202 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.540088 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.540159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.540183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.540228 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.698059 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.698195 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:54:03 crc kubenswrapper[4808]: I0217 15:54:03.987360 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 15:54:04 crc kubenswrapper[4808]: I0217 15:54:04.085766 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:20:22.535886197 +0000 UTC Feb 17 15:54:04 crc kubenswrapper[4808]: I0217 15:54:04.221333 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:04 crc kubenswrapper[4808]: I0217 15:54:04.223019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:04 crc kubenswrapper[4808]: I0217 15:54:04.223079 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:04 crc kubenswrapper[4808]: I0217 15:54:04.223104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.085998 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:56:44.628303389 +0000 UTC Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.224246 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.225381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.225424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.225438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.294437 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.294690 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.295961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.296006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.296020 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:05 crc kubenswrapper[4808]: I0217 15:54:05.844645 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.086382 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:07:48.937717765 +0000 UTC Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.152872 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.153172 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.154254 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.154282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.154290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.227779 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.229040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.229102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:06 crc kubenswrapper[4808]: I0217 15:54:06.229120 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:07 crc kubenswrapper[4808]: I0217 15:54:07.087436 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:59:58.63693405 +0000 UTC Feb 17 15:54:07 crc kubenswrapper[4808]: E0217 15:54:07.232903 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 15:54:08 crc kubenswrapper[4808]: I0217 15:54:08.088436 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:17:11.29537384 +0000 UTC Feb 17 15:54:09 crc kubenswrapper[4808]: I0217 15:54:09.089034 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:39:26.692106299 +0000 UTC Feb 17 15:54:10 crc kubenswrapper[4808]: I0217 15:54:10.090095 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:57:37.676622085 +0000 UTC Feb 17 15:54:10 crc kubenswrapper[4808]: I0217 15:54:10.476624 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:10 crc kubenswrapper[4808]: I0217 15:54:10.476827 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:10 crc kubenswrapper[4808]: I0217 15:54:10.478377 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:10 crc kubenswrapper[4808]: I0217 15:54:10.478428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:10 crc kubenswrapper[4808]: I0217 15:54:10.478443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.091372 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:33:09.921371333 +0000 UTC Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.350287 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.350387 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.355411 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.355467 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.657779 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]log ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]etcd ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/priority-and-fairness-filter ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-apiextensions-informers ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-apiextensions-controllers ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/crd-informer-synced ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-system-namespaces-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 17 15:54:11 crc kubenswrapper[4808]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 17 15:54:11 crc kubenswrapper[4808]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/bootstrap-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/start-kube-aggregator-informers ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/apiservice-registration-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 17 15:54:11 crc kubenswrapper[4808]: [+]poststarthook/apiservice-discovery-controller ok Feb 17 15:54:11 crc kubenswrapper[4808]: 
Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.856550 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.856806 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.858692 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.858752 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.858763 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:11 crc kubenswrapper[4808]: I0217 15:54:11.899974 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 17 15:54:12 crc kubenswrapper[4808]: I0217 15:54:12.093313 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:26:30.271243906 +0000 UTC
Feb 17 15:54:12 crc kubenswrapper[4808]: I0217 15:54:12.249703 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:12 crc kubenswrapper[4808]: I0217 15:54:12.251094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:12 crc kubenswrapper[4808]: I0217 15:54:12.251167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:12 crc kubenswrapper[4808]: I0217 15:54:12.251188 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:12 crc kubenswrapper[4808]: I0217 15:54:12.271162 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 17 15:54:13 crc kubenswrapper[4808]: I0217 15:54:13.094326 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:11:31.986921971 +0000 UTC
Feb 17 15:54:13 crc kubenswrapper[4808]: I0217 15:54:13.252310 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:13 crc kubenswrapper[4808]: I0217 15:54:13.253740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:13 crc kubenswrapper[4808]: I0217 15:54:13.253796 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:13 crc kubenswrapper[4808]: I0217 15:54:13.253812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
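etcd-crc walks through the normal probe progression here: the startup probe reports unhealthy, then started, and only afterwards does the readiness probe mark the pod ready. A toy model of that gating, using invented types rather than kubelet's actual prober structures:

```go
package main

import "fmt"

// podProbes models the ordering visible above: a startup probe gates
// the readiness probe, so "ready" can only follow "started".
type podProbes struct {
	started bool
	ready   bool
}

func (p *podProbes) observe(probe string, success bool) string {
	switch probe {
	case "startup":
		if success {
			p.started = true
			return "startup started"
		}
		return "startup unhealthy"
	case "readiness":
		if !p.started {
			return "readiness skipped (startup not complete)"
		}
		p.ready = success
		if success {
			return "readiness ready"
		}
		return "readiness not ready"
	}
	return "unknown probe"
}

func main() {
	etcd := &podProbes{}
	fmt.Println(etcd.observe("startup", false))  // 15:54:11 status="unhealthy"
	fmt.Println(etcd.observe("startup", true))   // 15:54:11 status="started"
	fmt.Println(etcd.observe("readiness", true)) // 15:54:12 status="ready"
}
```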
Feb 17 15:54:13 crc kubenswrapper[4808]: I0217 15:54:13.698536 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:54:13 crc kubenswrapper[4808]: I0217 15:54:13.698688 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:54:14 crc kubenswrapper[4808]: I0217 15:54:14.095469 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:58:56.839487271 +0000 UTC
Feb 17 15:54:15 crc kubenswrapper[4808]: I0217 15:54:15.096768 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:07:14.822172622 +0000 UTC
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.097164 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:14:13.512246434 +0000 UTC
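The cluster-policy-controller failure above is a plain client-side timeout: nothing answered on 192.168.126.11:10357 within the probe window, so the Go HTTP client gave up while awaiting headers. The same error text can be reproduced with a short http.Client timeout; the 1-second value and the skipped TLS verification below are assumptions for the sketch, not kubelet's exact prober configuration:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // short timeout, as probes typically use
		Transport: &http.Transport{
			// The control-plane endpoint uses a self-signed cert; this
			// sketch just skips verification to focus on the timeout path.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// The endpoint is the one from the log; against an unresponsive port
	// this prints a "Client.Timeout exceeded while awaiting headers" error.
	resp, err := client.Get("https://192.168.126.11:10357/healthz")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```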
listed" error: 12123ms (15:54:16.360) Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[643777040]: [12.123873161s] [12.123873161s] END Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.360446 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.362401 4808 trace.go:236] Trace[1570336420]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:03.653) (total time: 12708ms): Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[1570336420]: ---"Objects listed" error: 12708ms (15:54:16.362) Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[1570336420]: [12.708753128s] [12.708753128s] END Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.362431 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:16 crc kubenswrapper[4808]: E0217 15:54:16.365010 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.366558 4808 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.370633 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.406693 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55868->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.406788 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55868->192.168.126.11:17697: read: connection reset by peer" Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.417228 4808 csr.go:261] certificate signing request csr-2cnbv is approved, waiting to be issued Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.429709 4808 csr.go:257] certificate signing request csr-2cnbv is issued Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.656718 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.657547 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.657663 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.656718 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.657547 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.657663 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.661526 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.951966 4808 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952471 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952541 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952496 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:16 crc kubenswrapper[4808]: E0217 15:54:16.952555 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": read tcp 38.102.83.64:57292->38.102.83.64:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-apiserver-crc.189513a749c78f92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:53:57.65843957 +0000 UTC m=+1.174798643,LastTimestamp:2026-02-17 15:53:57.65843957 +0000 UTC m=+1.174798643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952496 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.087839 4808 apiserver.go:52] "Watching apiserver"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093126 4808 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093489 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-f8pfh","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
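Once the freshly issued client certificate lands, the kubelet tears down its existing API connections so the next TLS handshake presents the new credentials; the reflector watches riding those connections die immediately (the "very short watch" warnings) and one pending event write fails on the closed connection before being retried. A sketch of that rotation pattern using the GetClientCertificate hook from crypto/tls; the types and wiring here are illustrative, not client-go's actual transport cache:

```go
package main

import (
	"crypto/tls"
	"net/http"
	"sync"
)

// rotatingCert always hands the TLS stack the current certificate.
// Existing connections keep the handshake they were built with, which
// is why rotation also has to close idle connections.
type rotatingCert struct {
	mu   sync.RWMutex
	cert *tls.Certificate
}

func (r *rotatingCert) get(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.cert, nil
}

// rotate swaps in the new certificate and drops idle connections so the
// next request performs a fresh handshake with the new credentials;
// in-flight watches and writes fail and must reconnect, as logged above.
func (r *rotatingCert) rotate(t *http.Transport, c *tls.Certificate) {
	r.mu.Lock()
	r.cert = c
	r.mu.Unlock()
	t.CloseIdleConnections()
}

func main() {
	rc := &rotatingCert{cert: &tls.Certificate{}} // placeholder cert for the sketch
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{GetClientCertificate: rc.get},
	}
	_ = &http.Client{Transport: tr}
	rc.rotate(tr, &tls.Certificate{}) // pretend a renewed cert just arrived
}
```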
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093853 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093897 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.093970 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094034 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.094163 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094296 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094460 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.094591 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094732 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-f8pfh"
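Every pod that needs the cluster network is skipped with the same cause: the CNI conf directory is still empty, so the runtime reports NetworkReady=false and sandbox creation is refused. Pods that only log the missing sandbox without the CNI error, such as node-resolver-f8pfh, are likely host-network pods that do not depend on the plugin. A rough stand-in for the directory scan behind this decision (the real extension filtering lives in libcni):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether any network configuration file exists
// in the given directory, approximating the check whose failure produces
// the "no CNI configuration file" errors above.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory taken from the log message itself.
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("network is not ready: no CNI configuration file; has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}
```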
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.097522 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:34:15.995814718 +0000 UTC
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.097808 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.099937 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100103 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100208 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100515 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100634 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100680 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.101139 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.101474 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.101480 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.106933 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.111948 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.126031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status:
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.152522 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.167646 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.181224 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.182948 4808 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.190166 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.199903 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.209543 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.219926 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.233934 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.241549 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.255545 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.265658 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.265731 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.267930 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df" exitCode=255 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.267972 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df"} Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272289 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272318 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272348 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 
15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272379 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272463 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272492 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272520 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272547 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272596 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272622 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" 
(OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272833 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.273126 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.273136 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272654 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274464 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274526 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274543 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274632 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274679 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274737 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274767 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274763 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274831 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274860 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274943 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275233 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275131 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275238 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275259 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275426 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275463 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275617 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275957 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275981 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276070 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276250 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274891 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276326 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276392 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276450 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276479 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276501 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276529 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276558 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276603 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276627 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276697 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276721 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276753 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276786 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276810 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276840 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276872 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276900 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276920 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276947 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276971 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276999 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277051 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277125 
4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277160 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277192 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277218 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277291 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277315 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277340 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: 
I0217 15:54:17.277390 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.277396 4808 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278015 4808 scope.go:117] "RemoveContainer" containerID="68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276282 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277544 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277623 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277890 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277988 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278009 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278018 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278314 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278303 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278567 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278535 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278634 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278719 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278814 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278829 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278876 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278899 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278920 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278945 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278971 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278983 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278992 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.279021 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.279050 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.279076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281459 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281602 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281654 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282058 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282207 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282365 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282376 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282522 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.282623 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.782523741 +0000 UTC m=+21.298882814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282672 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282715 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282703 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282850 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283430 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283658 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283683 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283883 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283910 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283966 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.284906 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.284958 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285122 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285284 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285328 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285521 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285560 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285621 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285565 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285649 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285670 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285740 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285761 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285791 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285814 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285853 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287067 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287121 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287183 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287239 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287267 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287322 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287352 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287382 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287407 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287434 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287460 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287486 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287537 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287563 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287608 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287634 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287703 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287728 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287753 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287783 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287821 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287845 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287876 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287899 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287974 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288075 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288107 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288199 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288231 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288265 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288294 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288326 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288388 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288414 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288444 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288469 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288497 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288526 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288552 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288599 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288634 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288670 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288705 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289802 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289856 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289882 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289909 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289937 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289963 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289988 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290046 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290107 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291684 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291770 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291863 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291939 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292020 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292091 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292167 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292497 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292610 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292683 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292880 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293002 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293110 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293230 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293296 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293432 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293503 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293568 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293740 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293817 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293889 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293972 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294059 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294129 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294223 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294301 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294370 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294471 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294721 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294823 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295170 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295273 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkcvd\" (UniqueName: \"kubernetes.io/projected/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-kube-api-access-mkcvd\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295655 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295916 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296154 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285754 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285748 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286425 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286457 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286788 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287008 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287125 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287197 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287243 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287535 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287549 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287612 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286835 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287637 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288016 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288058 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288704 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288868 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288893 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289285 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289265 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289413 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289684 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289732 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289988 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290530 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290607 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290621 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291005 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291096 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291160 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291759 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292482 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292991 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293094 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293545 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293836 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294003 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294086 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294656 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294496 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294948 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294990 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295404 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295621 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296089 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296267 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296646 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-hosts-file\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297358 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297368 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297407 4808 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297437 4808 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297459 4808 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297480 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297500 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297524 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297544 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297564 4808 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297611 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297631 4808 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297650 4808 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297669 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297692 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297713 4808 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297732 4808 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297751 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297771 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297790 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297808 4808 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297825 4808 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297844 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297862 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297880 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297898 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297916 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297935 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297952 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297969 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297986 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298003 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298020 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298040 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298061 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298078 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298231 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298254 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298272 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298291 4808 reconciler_common.go:293] "Volume detached 
for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298309 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298325 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298341 4808 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298360 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298378 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298410 4808 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298429 4808 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298446 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298465 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298483 4808 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298506 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298524 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298544 4808 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298562 4808 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298648 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298670 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298691 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298708 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298725 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298742 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298757 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298772 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298790 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298809 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298824 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298838 4808 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298854 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298868 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298887 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298902 4808 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298918 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298932 4808 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298946 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298961 4808 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298976 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298991 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299006 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299022 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299040 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" 
(UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299056 4808 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299073 4808 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299094 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299113 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299131 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299153 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299174 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299192 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299210 4808 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299229 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299252 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299271 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299292 4808 reconciler_common.go:293] "Volume detached for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299311 4808 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299332 4808 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299348 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299365 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299382 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299401 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299421 4808 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299440 4808 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299459 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299477 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299497 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298380 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: 
"43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299001 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299372 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299806 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.300048 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.300644 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.301914 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.302298 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.302925 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.303648 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304097 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304253 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304356 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304965 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.305635 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.305706 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.305963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.306225 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.306248 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307229 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307421 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307616 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.308346 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310039 4808 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310366 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310854 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.311385 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.311730 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.312024 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.312595 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.312920 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.316104 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.816081662 +0000 UTC m=+21.332440735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.314051 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.314524 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315021 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315310 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315501 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315726 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315913 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315948 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.316704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.316790 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.318252 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.318443 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.318508 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.818493346 +0000 UTC m=+21.334852619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.318950 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.319951 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.323930 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324275 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324493 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324816 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.325363 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.325712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.325955 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.326213 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.326345 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.326541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.327020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.327743 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328039 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328065 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328081 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328159 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.828135889 +0000 UTC m=+21.344494962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.328611 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.327721 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.329198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.330615 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.330790 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331853 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331875 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331887 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.331915 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.332111 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331928 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.831916029 +0000 UTC m=+21.348275102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.332591 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.332624 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.333143 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.333325 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.333863 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.334203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.334588 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.342819 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.343404 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.343765 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344070 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344185 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344391 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344549 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344631 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344810 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.345365 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.347544 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod 
"5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.348140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.348737 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349373 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349721 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349754 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349260 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349980 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349353 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.348947 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.350872 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.351826 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.355464 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.356567 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.361687 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.365299 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.367437 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.376079 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.380367 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.381377 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.391932 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400025 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400624 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400764 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkcvd\" (UniqueName: \"kubernetes.io/projected/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-kube-api-access-mkcvd\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-hosts-file\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401113 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401134 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401145 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401155 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401166 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401204 4808 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on 
node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401215 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401226 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401236 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401247 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401257 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401270 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401280 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401309 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401320 4808 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401333 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401475 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401487 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401497 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath 
\"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401507 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401520 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401530 4808 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401542 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401554 4808 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401565 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401596 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401610 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401624 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401639 4808 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401653 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401671 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401684 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc 
kubenswrapper[4808]: I0217 15:54:17.401697 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401711 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401726 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401738 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401751 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401764 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401776 4808 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401789 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401802 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401815 4808 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401828 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401840 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401853 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc 
kubenswrapper[4808]: I0217 15:54:17.401865 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401877 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401889 4808 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401901 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401950 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401997 4808 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402050 4808 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402066 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402081 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402093 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402134 4808 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402150 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402154 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 
crc kubenswrapper[4808]: I0217 15:54:17.402162 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402224 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402242 4808 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402260 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402274 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402286 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402299 4808 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402310 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402321 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402331 4808 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402341 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402351 4808 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402360 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc 
kubenswrapper[4808]: I0217 15:54:17.402370 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402379 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402390 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402399 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402409 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402418 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402427 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402437 4808 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402449 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402458 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402467 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402477 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402486 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402496 4808 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402507 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402516 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402527 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402538 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402552 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402562 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402603 4808 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402614 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402088 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-hosts-file\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.407604 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.410036 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.413297 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.420618 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.421219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkcvd\" (UniqueName: \"kubernetes.io/projected/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-kube-api-access-mkcvd\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.421357 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.430588 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 15:49:16 +0000 UTC, rotation deadline is 2026-12-15 14:46:04.347844208 +0000 UTC Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.430669 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7222h51m46.917177557s for next certificate rotation Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.434674 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.438044 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: W0217 15:54:17.453046 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532 WatchSource:0}: Error finding container 09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532: Status 404 returned error can't find the container with id 09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.453623 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.464446 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: W0217 15:54:17.472978 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cb51e0_9eb4_4948_a9bf_93cddaa429fe.slice/crio-8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20 WatchSource:0}: Error finding container 8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20: Status 404 returned error can't find the container with id 8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.530739 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pr5s4"] Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.531627 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.538690 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.538970 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.539109 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.546659 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.557950 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17
T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.571099 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.587179 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.599039 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.605979 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xc9\" (UniqueName: \"kubernetes.io/projected/a4989dd6-5d44-42b5-882c-12a10ffc7911-kube-api-access-q2xc9\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.606077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4989dd6-5d44-42b5-882c-12a10ffc7911-serviceca\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.606108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4989dd6-5d44-42b5-882c-12a10ffc7911-host\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.614428 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.629939 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.653311 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.669083 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.686466 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.707016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xc9\" (UniqueName: \"kubernetes.io/projected/a4989dd6-5d44-42b5-882c-12a10ffc7911-kube-api-access-q2xc9\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.707082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4989dd6-5d44-42b5-882c-12a10ffc7911-serviceca\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.707123 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4989dd6-5d44-42b5-882c-12a10ffc7911-host\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.707208 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4989dd6-5d44-42b5-882c-12a10ffc7911-host\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.708358 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4989dd6-5d44-42b5-882c-12a10ffc7911-serviceca\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.725341 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2xc9\" (UniqueName: \"kubernetes.io/projected/a4989dd6-5d44-42b5-882c-12a10ffc7911-kube-api-access-q2xc9\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.807799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.807976 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.807951508 +0000 UTC m=+22.324310581 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.864664 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: W0217 15:54:17.877637 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4989dd6_5d44_42b5_882c_12a10ffc7911.slice/crio-a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2 WatchSource:0}: Error finding container a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2: Status 404 returned error can't find the container with id a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.908985 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.909048 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.909082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.909114 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909236 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909266 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909265 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: 
object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909341 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.9093211 +0000 UTC m=+22.425680173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909394 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.909364451 +0000 UTC m=+22.425723544 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909266 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909434 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909452 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909292 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909482 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909529 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.909508035 +0000 UTC m=+22.425867108 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909549 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.909540686 +0000 UTC m=+22.425899759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.097801 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:08:05.994079398 +0000 UTC Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.274014 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.276368 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.276843 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.278030 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f8pfh" event={"ID":"13cb51e0-9eb4-4948-a9bf-93cddaa429fe","Type":"ContainerStarted","Data":"e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.278085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f8pfh" event={"ID":"13cb51e0-9eb4-4948-a9bf-93cddaa429fe","Type":"ContainerStarted","Data":"8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.279403 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.282153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.282181 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"84eb93f311bf1bd277aed541552b61365df084f2986d8df7dc489002bdb980cd"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.283897 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pr5s4" event={"ID":"a4989dd6-5d44-42b5-882c-12a10ffc7911","Type":"ContainerStarted","Data":"228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.283921 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pr5s4" event={"ID":"a4989dd6-5d44-42b5-882c-12a10ffc7911","Type":"ContainerStarted","Data":"a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.285662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.285698 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.285714 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"110706b24914ccd13caa26782092eec6177d2477667e6ad2b4c66eb04823a4ee"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.292507 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.301068 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kx4nl"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.301708 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-msgfd"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.301930 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.302308 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.303287 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k8v8k"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.303585 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304471 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304698 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304865 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304980 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.305139 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.305242 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.305338 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.308009 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.308758 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.309182 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.309384 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.309539 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.328165 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.343345 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.361527 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.383118 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.400514 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.412943 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-cni-binary-copy\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413020 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413051 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413092 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-k8s-cni-cncf-io\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413110 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-cnibin\") pod \"multus-msgfd\" (UID: 
\"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-os-release\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-netns\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413186 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-multus-certs\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413205 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413244 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-socket-dir-parent\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-etc-kubernetes\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413287 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca38b6e7-b21c-453d-8b6c-a163dac84b35-proxy-tls\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413323 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm52q\" (UniqueName: \"kubernetes.io/projected/ca38b6e7-b21c-453d-8b6c-a163dac84b35-kube-api-access-bm52q\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413445 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-system-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413513 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-kubelet\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413553 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmn2s\" (UniqueName: \"kubernetes.io/projected/18916d6d-e063-40a0-816f-554f95cd2956-kube-api-access-qmn2s\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413611 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-multus\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413697 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-multus-daemon-config\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413765 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca38b6e7-b21c-453d-8b6c-a163dac84b35-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413803 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-cnibin\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413836 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t282\" (UniqueName: \"kubernetes.io/projected/a6c9480c-4161-4c38-bec1-0822c6692f6e-kube-api-access-7t282\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-bin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413898 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-os-release\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413922 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-hostroot\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413962 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-conf-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.414043 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.414102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca38b6e7-b21c-453d-8b6c-a163dac84b35-rootfs\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.415119 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.429394 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.441099 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.455042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.468089 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.485988 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.498042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.510883 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515715 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-cni-binary-copy\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515782 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515867 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-k8s-cni-cncf-io\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-os-release\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515948 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-cnibin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-netns\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516069 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-k8s-cni-cncf-io\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-netns\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516105 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-cnibin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516200 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-multus-certs\") pod \"multus-msgfd\" 
(UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516249 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-etc-kubernetes\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516298 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-multus-certs\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516348 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-os-release\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca38b6e7-b21c-453d-8b6c-a163dac84b35-proxy-tls\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516324 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516384 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm52q\" (UniqueName: \"kubernetes.io/projected/ca38b6e7-b21c-453d-8b6c-a163dac84b35-kube-api-access-bm52q\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-etc-kubernetes\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " 
pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516477 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-socket-dir-parent\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516553 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-socket-dir-parent\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516592 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-system-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516634 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-system-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516651 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-cni-binary-copy\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516654 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-kubelet\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516685 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-kubelet\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516707 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmn2s\" (UniqueName: \"kubernetes.io/projected/18916d6d-e063-40a0-816f-554f95cd2956-kube-api-access-qmn2s\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-multus\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516757 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-multus-daemon-config\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516775 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca38b6e7-b21c-453d-8b6c-a163dac84b35-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-cnibin\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-multus\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516810 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t282\" (UniqueName: \"kubernetes.io/projected/a6c9480c-4161-4c38-bec1-0822c6692f6e-kube-api-access-7t282\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516854 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-cnibin\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-bin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516908 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-os-release\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516927 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516959 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-bin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516966 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-hostroot\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516937 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-hostroot\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-os-release\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517051 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-conf-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517160 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517201 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca38b6e7-b21c-453d-8b6c-a163dac84b35-rootfs\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-conf-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517313 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca38b6e7-b21c-453d-8b6c-a163dac84b35-rootfs\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517465 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-multus-daemon-config\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517672 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517674 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca38b6e7-b21c-453d-8b6c-a163dac84b35-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517762 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.526186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca38b6e7-b21c-453d-8b6c-a163dac84b35-proxy-tls\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.526354 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.538184 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t282\" (UniqueName: \"kubernetes.io/projected/a6c9480c-4161-4c38-bec1-0822c6692f6e-kube-api-access-7t282\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.538491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm52q\" (UniqueName: \"kubernetes.io/projected/ca38b6e7-b21c-453d-8b6c-a163dac84b35-kube-api-access-bm52q\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.545532 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.550186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmn2s\" (UniqueName: \"kubernetes.io/projected/18916d6d-e063-40a0-816f-554f95cd2956-kube-api-access-qmn2s\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.566070 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.581480 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.593766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.606063 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.616545 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.656067 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.669182 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: W0217 15:54:18.673715 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18916d6d_e063_40a0_816f_554f95cd2956.slice/crio-0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d WatchSource:0}: Error finding container 0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d: Status 404 returned error can't find the container with id 0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.676432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: W0217 15:54:18.700969 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6c9480c_4161_4c38_bec1_0822c6692f6e.slice/crio-7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215 WatchSource:0}: Error finding container 7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215: Status 404 returned error can't find the container with id 7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215 Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.702671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.704325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.708190 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.709238 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.709724 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.711133 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.711544 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.711705 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.713417 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.728996 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.741936 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.762096 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.800142 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.819819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 
15:54:18.819960 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.819941031 +0000 UTC m=+24.336300104 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.819990 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820016 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820031 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820063 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820132 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod 
\"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820155 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820175 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820497 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820549 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820619 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820715 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820787 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820828 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.842544 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.884723 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.895305 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.912885 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922386 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922547 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922582 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922604 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922629 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922699 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922718 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922738 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922761 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922807 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922830 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922850 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922869 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922889 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922912 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922936 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922974 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923231 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923305 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.923286035 +0000 UTC m=+24.439645098 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923321 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923343 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923383 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923411 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923417 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.923394648 +0000 UTC m=+24.439753721 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923471 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923619 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923760 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923803 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923927 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923957 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924042 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924061 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924076 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924112 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924112 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924252 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.924222909 +0000 UTC m=+24.440581982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924297 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924349 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924371 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924383 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924492 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.924468836 +0000 UTC m=+24.440827909 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924780 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924827 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.926043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.932494 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.947814 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.960020 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.975111 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.974965 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.975498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.989947 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.019330 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:19 crc kubenswrapper[4808]: W0217 15:54:19.033386 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5748f02a_e3dd_47c7_b89d_b472c718e593.slice/crio-ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9 WatchSource:0}: Error finding container ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9: Status 404 returned error can't find the container with id ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9 Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.098320 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:20:53.965851242 +0000 UTC Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.145803 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.145858 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.145941 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:19 crc kubenswrapper[4808]: E0217 15:54:19.145993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:54:19 crc kubenswrapper[4808]: E0217 15:54:19.146123 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:54:19 crc kubenswrapper[4808]: E0217 15:54:19.146233 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.150225 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.151087 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.152427 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.153190 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.154235 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.154819 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.155484 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.156676 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.157432 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.158361 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.158914 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.159989 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.160522 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.161034 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.161975 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.162476 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.163512 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.163952 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.164518 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.165538 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.166156 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.167108 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.167536 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.168531 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.168989 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.169749 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.170872 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.171379 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.172368 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.172863 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.173712 4808 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.173822 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.175395 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.176311 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.176771 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.178263 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.178923 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.179875 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.180635 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.181865 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.182343 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.183387 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.184021 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.185023 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.185469 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.186342 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.186905 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.187954 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.188443 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.190071 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.190531 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.191514 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.192157 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.192630 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.290627 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.290681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.290700 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"e07e40ad4d38873b67ba6ba5a9d61cab8dd149e8e9c16cd0656006595f3789f3"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.292473 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.292532 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.294679 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" exitCode=0 Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.294729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.294750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.297494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerStarted","Data":"7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.297532 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerStarted","Data":"7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.306446 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.323727 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.341428 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z"
Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.353473 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z"
Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.366943 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z"
Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.389865 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z"
Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.414227 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z"
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.430308 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.443079 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.463693 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.478469 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.496325 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.514766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.530098 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.543756 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.560560 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.586655 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.601593 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.620305 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.640282 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.661401 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.703396 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.743281 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.780396 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.819849 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.857999 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.099160 4808 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:32:01.967522047 +0000 UTC
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.304976 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"}
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305028 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"}
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"}
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305068 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"}
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305079 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"}
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.308015 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d" exitCode=0
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.308048 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d"}
Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.327493 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.347959 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.367074 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.385214 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.406525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.422474 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.441123 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.467388 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.483847 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.496999 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.517525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.539785 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.555691 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.704229 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.710748 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.714702 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.720216 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.737169 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.753485 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.766401 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.785722 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.816625 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.833203 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.843968 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.845915 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.845983 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.845959458 +0000 UTC m=+28.362318541 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.862198 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.876653 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.891962 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.903521 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.922538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949846 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949934 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950055 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950118 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.95010014 +0000 UTC m=+28.466459213 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950238 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950264 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950279 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950314 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.950300315 +0000 UTC m=+28.466659378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950381 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950398 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950408 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950439 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.950431979 +0000 UTC m=+28.466791052 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950510 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950542 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.950529131 +0000 UTC m=+28.466888204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.967603 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc 
kubenswrapper[4808]: I0217 15:54:20.999489 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.041865 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.086763 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.100053 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:32:53.20709655 +0000 UTC Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.120159 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.145435 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.145491 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.145438 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:21 crc kubenswrapper[4808]: E0217 15:54:21.145648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:21 crc kubenswrapper[4808]: E0217 15:54:21.145768 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:21 crc kubenswrapper[4808]: E0217 15:54:21.145914 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.160033 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.200820 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.238984 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.281604 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.317176 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.320342 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44" exitCode=0 Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.320439 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44"} Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.322361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04"} Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.335259 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.367748 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.405634 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.440823 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.484162 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.523736 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.560792 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.603748 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.642567 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.686182 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.721436 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.758151 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.806170 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.839938 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.885285 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.919848 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.960342 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.997791 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.037793 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.101086 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:50:11.523443574 +0000 UTC Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.329385 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b" exitCode=0 Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.329489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b"} Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.352958 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.378405 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.404270 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.427358 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.475294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.492169 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.510757 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.528060 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.544135 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.562451 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.575931 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.593912 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.611467 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.637073 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.765954 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768478 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768623 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.777500 4808 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.777799 4808 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.778993 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.779020 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.779030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.779042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: 
I0217 15:54:22.779057 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.799389 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803507 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803539 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803591 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.827436 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
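The patch never reaches the API server's storage: it is intercepted by the node.network-node-identity.openshift.io validating webhook, whose serving certificate expired on 2025-08-24 while the node clock reads 2026-02-17. A minimal Go sketch to inspect what the endpoint at 127.0.0.1:9743 (the address from the error) is actually serving; skipping verification is deliberate here, since the certificate is known to be expired and the goal is to read its validity window, not to trust it:

// certcheck.go: dump the validity window of the certificate served by the
// node-identity webhook endpoint named in the error above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Address taken from the failing webhook call in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
			cert.Subject.CommonName, cert.NotBefore, cert.NotAfter)
	}
}

Run on the node itself, this should show a notAfter of 2025-08-24T17:21:41Z on the leaf certificate, matching the x509 error in every retry below.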
event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834677 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.849967 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.854846 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.854952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.855026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.855109 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.855236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.868406 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873516 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
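Every retry in the burst fails identically because the failure is deterministic: the node clock is months past the certificate's notAfter, so no amount of retrying helps. The size of the gap can be computed directly from the two timestamps in the error message; a small sketch:

// expirygap.go: how far past the webhook certificate's notAfter the node
// clock is, using the two timestamps copied verbatim from the kubelet error.
package main

import (
	"fmt"
	"time"
)

func main() {
	now, err := time.Parse(time.RFC3339, "2026-02-17T15:54:22Z")
	if err != nil {
		panic(err)
	}
	notAfter, err := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
	if err != nil {
		panic(err)
	}
	gap := now.Sub(notAfter)
	fmt.Printf("certificate expired %s (%.0f days) before the observed time\n",
		gap, gap.Hours()/24)
}

This prints roughly 177 days, far outside any plausible clock-skew tolerance, so either the webhook's certificate must be rotated or the guest clock corrected before the status patch can succeed.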
event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873557 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.886472 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.886757 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
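The "update node status exceeds retry count" entry marks the end of one sync burst: the kubelet caps each burst at a fixed retry count (nodeStatusUpdateRetry, 5 in the upstream kubelet source), and the five consecutive "will retry" errors above, the first of which begins before this excerpt, exhaust that budget. It will start a fresh burst on the next status-update interval. The control flow is a plain bounded-retry loop; a schematic sketch, where the function name and error text are illustrative stand-ins rather than the kubelet's actual API:

// retryburst.go: schematic of the bounded retry visible in the log: five
// "Error updating node status, will retry" entries followed by one
// "Unable to update node status ... exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // the kubelet retries a status sync this many times

// tryPatchNodeStatus stands in for the kubelet's PATCH of node status; here it
// always fails, the way an expired webhook certificate makes every call fail.
func tryPatchNodeStatus() error {
	return errors.New("failed calling webhook: tls: certificate has expired")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}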
event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888880 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888918 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992142 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992211 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992247 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095689 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095720 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.101487 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:38:15.710958583 +0000 UTC Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.147381 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:23 crc kubenswrapper[4808]: E0217 15:54:23.147505 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.147885 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:23 crc kubenswrapper[4808]: E0217 15:54:23.147964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.148025 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:23 crc kubenswrapper[4808]: E0217 15:54:23.148072 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.198957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199036 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199050 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302371 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302422 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.335929 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8" exitCode=0 Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.336008 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.345764 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.355200 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.375945 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.396994 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405840 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405942 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.422034 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:
54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.440169 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.455447 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.474559 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.498888 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509682 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.517600 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.532100 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.545531 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.566070 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.583599 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.603891 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613119 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613200 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613249 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716206 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821415 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.892901 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928303 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928356 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032374 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032451 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032478 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032499 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.102647 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:16:25.944252745 +0000 UTC Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.136848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137223 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137825 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241314 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241334 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.345670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.355386 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c" exitCode=0 Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.355704 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.384964 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.410232 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.442243 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455828 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455849 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.466453 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.482967 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.508430 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.536652 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558240 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558470 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558496 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558512 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558524 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.572844 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.586911 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.603591 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.625765 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.641655 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.662666 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663325 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.766707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767131 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767327 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767434 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880103 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880144 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.895048 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.895378 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.895325552 +0000 UTC m=+36.411684655 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983823 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.992463 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996260 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996437 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 
15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996554 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996604 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996618 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996653 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996683 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996688 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996721 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996727 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996693 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.996669892 +0000 UTC m=+36.513028965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996815 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.996781834 +0000 UTC m=+36.513141087 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996850 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.996830396 +0000 UTC m=+36.513189689 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996904 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.996885307 +0000 UTC m=+36.513244630 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086547 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086592 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086602 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.104294 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:47:39.898735694 +0000 UTC Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.145656 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.145714 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:25 crc kubenswrapper[4808]: E0217 15:54:25.145831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:25 crc kubenswrapper[4808]: E0217 15:54:25.145926 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.146045 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:25 crc kubenswrapper[4808]: E0217 15:54:25.146217 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190675 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190735 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294691 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.373605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.374606 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.374808 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.374889 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.385353 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4" exitCode=0 Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.385432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.399346 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400665 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400685 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400718 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400739 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.433667 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.433757 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.434537 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.462321 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.483160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.506907 4808 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.507016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.507038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.507072 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.507092 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.511324 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993c
bcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.532919 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.555492 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.576474 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.593441 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610836 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 
15:54:25.610877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610915 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.613413 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-
cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.635325 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.652974 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.669065 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.690682 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.710026 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.713956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714089 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.729269 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.750282 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.766491 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.789011 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.810704 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.817987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818063 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818235 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818256 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.831555 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.850425 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.868945 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.884850 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.902159 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.917471 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921840 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921912 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.939708 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef5
2dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.954509 4808 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 
17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025095 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025119 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025152 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025175 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.105308 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:10:02.283555141 +0000 UTC Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128204 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128223 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128254 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128278 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.201131 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231882 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231931 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231959 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335842 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335989 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.396459 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerStarted","Data":"53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.418415 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.438847 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439260 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.462557 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.481348 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 
15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.499273 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.518084 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.538420 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548803 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548912 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.566637 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.592952 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.626888 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655002 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655121 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655941 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.677442 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.696711 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.723157 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.758923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.758995 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.759010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.759032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.759047 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862431 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862517 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862549 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862587 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965698 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965726 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068743 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068826 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.105717 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:14:10.322190307 +0000 UTC
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.146064 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:54:27 crc kubenswrapper[4808]: E0217 15:54:27.146230 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.146618 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:27 crc kubenswrapper[4808]: E0217 15:54:27.146694 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.146766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:27 crc kubenswrapper[4808]: E0217 15:54:27.146826 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171147 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171249 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.172310 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.194525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.205792 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.218902 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.236031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.255951 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274163 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274173 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274342 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.288636 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 
15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.315727 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.343351 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\"
:\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 
crc kubenswrapper[4808]: I0217 15:54:27.358483 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.375602 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377099 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.390388 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.404507 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.480935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.480998 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.481018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.481041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.481057 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584274 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686989 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.687009 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790353 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790491 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.835121 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.892968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893036 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893061 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893088 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893109 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997496 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997514 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997525 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100925 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100947 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100998 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.106183 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:34:49.865572005 +0000 UTC Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204339 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204408 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308609 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308686 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.407883 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/0.log" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410316 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410444 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410730 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.413294 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7" exitCode=1 Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.413384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.414225 4808 scope.go:117] "RemoveContainer" containerID="84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.431969 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.446273 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.460799 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.488896 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.506031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.513532 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.513831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.514043 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.514288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.514474 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.525192 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.552656 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.571238 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.587363 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.612347 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618751 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.636179 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.663187 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.685890 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.703825 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723375 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723407 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723429 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827370 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827472 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931319 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931410 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931470 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045153 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045224 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045238 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.106887 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:22:14.441856303 +0000 UTC Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.145222 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.145276 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.145378 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:29 crc kubenswrapper[4808]: E0217 15:54:29.145490 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:29 crc kubenswrapper[4808]: E0217 15:54:29.145624 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:29 crc kubenswrapper[4808]: E0217 15:54:29.145745 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147410 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250324 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250335 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250353 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250365 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353759 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353772 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.421383 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/0.log" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.425125 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.425777 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.446538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456773 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456863 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.462614 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.477493 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.494381 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.515928 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.543701 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559377 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559446 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559465 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559493 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559528 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.567671 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.584435 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.598895 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.630075 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.646234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.662010 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664846 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664938 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.676235 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.693215 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769077 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769142 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769184 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872524 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872644 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872676 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872694 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976385 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976458 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976471 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080300 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.107075 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:39:35.608313609 +0000 UTC
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184726 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287340 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287512 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391706 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391806 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391830 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391883 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.433015 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.434280 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/0.log"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.440057 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" exitCode=1
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.440134 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2"}
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.440218 4808 scope.go:117] "RemoveContainer" containerID="84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.441655 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2"
Feb 17 15:54:30 crc kubenswrapper[4808]: E0217 15:54:30.442034 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593"
Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.463774 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.490826 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496257 4808 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496378 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496398 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.519463 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.546963 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.574222 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.595974 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.600980 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.619443 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.637098 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.660025 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.687366 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.705648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.705949 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706325 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706419 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706446 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.728850 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.755631 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.776354 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817327 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817346 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920735 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.957563 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6"] Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.958203 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.962885 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.963542 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.980451 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.994996 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.018692 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.023997 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024083 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024185 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.040244 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.062964 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.077624 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.077924 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.077985 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjv2r\" (UniqueName: \"kubernetes.io/projected/067d21e4-9618-42af-bb01-1ea41d1bd7ef-kube-api-access-mjv2r\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.078047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.081538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.101217 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.107818 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:44:17.686530551 +0000 UTC Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.121610 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127467 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.145620 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.145648 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.145797 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.145969 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.147067 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.149450 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.149616 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.165869 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.178863 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.178951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.178973 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjv2r\" (UniqueName: 
\"kubernetes.io/projected/067d21e4-9618-42af-bb01-1ea41d1bd7ef-kube-api-access-mjv2r\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.179004 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.179978 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.180312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.194923 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.195395 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.207904 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjv2r\" (UniqueName: \"kubernetes.io/projected/067d21e4-9618-42af-bb01-1ea41d1bd7ef-kube-api-access-mjv2r\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.211286 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230510 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230607 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.234848 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.268160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.282777 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.288103 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333528 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333650 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333702 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437296 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437357 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.460039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" event={"ID":"067d21e4-9618-42af-bb01-1ea41d1bd7ef","Type":"ContainerStarted","Data":"78819e453ccbb6cd63323c69e65a42d589263d8890f5f1c2679def34a5786d56"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.463404 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.468755 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.468952 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.490246 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.506079 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.524439 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf83
3d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.539106 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543452 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.556249 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.570526 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.584294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.596127 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.611739 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.644326 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd321
9b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646242 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646255 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646295 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.658387 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.672653 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.683844 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.698406 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.711877 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.729827 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-z8tn8"] Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.730504 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.730633 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.745369 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749632 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749641 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.760894 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.777652 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.789697 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.804425 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.820074 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.836708 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.846337 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852293 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.867048 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.888509 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.888561 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f79s\" (UniqueName: \"kubernetes.io/projected/b88c3e5f-7390-477c-ae74-aced26a8ddf9-kube-api-access-8f79s\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.895361 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.909313 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.930482 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.942492 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954817 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954829 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.955278 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.965470 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.979294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.989804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.989857 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f79s\" (UniqueName: \"kubernetes.io/projected/b88c3e5f-7390-477c-ae74-aced26a8ddf9-kube-api-access-8f79s\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.990105 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.990236 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.490202458 +0000 UTC m=+36.006561571 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.009935 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f79s\" (UniqueName: \"kubernetes.io/projected/b88c3e5f-7390-477c-ae74-aced26a8ddf9-kube-api-access-8f79s\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.057969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058044 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058066 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058082 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.108512 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:02:57.486171536 +0000 UTC Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161646 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161842 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264792 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264910 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264929 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367378 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367439 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367451 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367468 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367479 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471049 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471133 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471160 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471186 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.474506 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" event={"ID":"067d21e4-9618-42af-bb01-1ea41d1bd7ef","Type":"ContainerStarted","Data":"ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.474615 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" event={"ID":"067d21e4-9618-42af-bb01-1ea41d1bd7ef","Type":"ContainerStarted","Data":"bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.496164 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:32 crc kubenswrapper[4808]: E0217 15:54:32.496407 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:32 crc kubenswrapper[4808]: E0217 15:54:32.496480 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:33.496462419 +0000 UTC m=+37.012821502 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.497220 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.515509 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.530630 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.551008 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.569003 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574367 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574466 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574485 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.586352 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.607207 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.622329 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.637903 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf83
3d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.657744 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.677358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.678312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.678335 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.678361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.678381 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.680566 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.699764 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.714463 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.734203 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.754042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.768712 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 
15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782716 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885385 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885407 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.900531 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:32 crc kubenswrapper[4808]: E0217 15:54:32.900776 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:48.900731393 +0000 UTC m=+52.417090496 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988810 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002781 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002989 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003125 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003145 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003177 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003184 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003210 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003125 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003280 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003306 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003244 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003221962 +0000 UTC m=+52.519581075 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003398 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003358346 +0000 UTC m=+52.519717609 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003433 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003417717 +0000 UTC m=+52.519776820 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003454 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003443678 +0000 UTC m=+52.519802781 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008838 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008905 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.038481 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043711 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043809 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.060017 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.064216 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.064492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.064783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.065029 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.065236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.078229 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083467 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083602 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083614 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.104521 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.109287 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:29:14.417366291 +0000 UTC Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110148 4808 
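Every status-patch attempt above fails the same way: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24, and the node clock reads 2026-02-17, so the TLS handshake to https://127.0.0.1:9743 is rejected before the patch is ever evaluated. The "current time X is after Y" wording is the standard Go validity-window comparison; a minimal standalone sketch of that check (illustrative, not the kubelet's own code; the real check happens inside crypto/tls and crypto/x509 during the handshake):

```go
// Minimal sketch: reproduce the "certificate has expired or is not yet
// valid" decision for a PEM certificate given on the command line.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile(os.Args[1]) // path to a serving certificate
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Run against the webhook's serving certificate, this would print the same "expired" message the handshake error carries.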
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110259 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110284 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.124242 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...status patch payload identical to the previous attempts, elided...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z"
Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.124474 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
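The "update node status exceeds retry count" entry marks the end of one sync cycle: the kubelet makes a small, fixed number of patch attempts and then gives up until the next tick of the status loop. A sketch of that bounded-retry shape (the constant and helper names echo the log but are illustrative):

```go
// Sketch of the bounded-retry pattern behind "update node status exceeds
// retry count": a fixed number of attempts per sync, then give up until
// the next status-update tick. Names and the retry count are assumptions.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed per-sync attempt budget

func tryUpdateNodeStatus(attempt int) error {
	// Stand-in for the real PATCH of the Node object; here every attempt
	// fails the way the log shows (the admission webhook rejects it).
	return errors.New("Internal error occurred: failed calling webhook")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil // one success ends the cycle
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```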
event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127414 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127430 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145143 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145177 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145217 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145177 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145352 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145987 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145992 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231315 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231346 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231371 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.335931 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336148 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.335931 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336148 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.439922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440028 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440120 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.509993 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.510240 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.510357 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:35.510325475 +0000 UTC m=+39.026684578 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered
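The nestedpendingoperations entry shows the volume manager's per-operation exponential backoff: each consecutive MountVolume failure lengthens the wait before the next retry, and a 2s durationBeforeRetry is consistent with an initial short delay doubling on each failure. A sketch of that backoff shape (the 500ms initial delay and the cap are assumptions, not values read out of this kubelet):

```go
// Sketch of the per-operation exponential backoff behind
// "No retries permitted until ... (durationBeforeRetry 2s)".
package main

import (
	"fmt"
	"time"
)

type expBackoff struct {
	duration time.Duration // wait imposed by the most recent failure
	cap      time.Duration // upper bound on the wait
	lastErr  time.Time     // when the last failure happened
}

func (b *expBackoff) failed(now time.Time) {
	if b.duration == 0 {
		b.duration = 500 * time.Millisecond // assumed first-failure delay
	} else {
		b.duration *= 2 // each later failure doubles the delay
		if b.duration > b.cap {
			b.duration = b.cap
		}
	}
	b.lastErr = now
}

// retryAllowedAt is the "No retries permitted until ..." timestamp.
func (b *expBackoff) retryAllowedAt() time.Time {
	return b.lastErr.Add(b.duration)
}

func main() {
	b := &expBackoff{cap: 2*time.Minute + 2*time.Second} // assumed cap
	now := time.Date(2026, 2, 17, 15, 54, 33, 0, time.UTC)
	for i := 1; i <= 3; i++ {
		b.failed(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			i, b.retryAllowedAt().UTC().Format("2006-01-02 15:04:05"), b.duration)
	}
}
```

Under these assumptions the third consecutive failure yields the 2s wait seen in the log line above.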
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647639 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751491 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855647 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855671 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855719 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959353 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959418 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959450 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063269 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063318 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063340 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.110125 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:37:50.567919637 +0000 UTC
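The certificate_manager lines are a separate thread from the expired webhook certificate: they track the kubelet's own serving certificate, which is valid until 2026-02-24. The logged rotation deadline differs on each evaluation (2025-11-22, then 2025-11-21, then 2025-11-09 below) because the manager re-derives it with random jitter somewhere past roughly the 70% point of the certificate's lifetime. A sketch of that computation (the jitter fractions and the NotBefore date are assumptions; only the expiry comes from the log):

```go
// Sketch: why the logged "rotation deadline" moves between samples.
// The deadline is recomputed with random jitter on each evaluation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	// Assumed jitter window: 70% to 90% of the certificate's lifetime.
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	for i := 0; i < 3; i++ {
		// Each call lands on a different deadline, like the log lines do.
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter).UTC())
	}
}
```

Note that all three logged deadlines are already in the past relative to the node clock, so rotation is due; the expiration printed alongside confirms the current serving certificate is nonetheless still valid.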
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167084 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167216 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271947 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.272155 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375458 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375624 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479302 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479462 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583675 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583699 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583753 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698247 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801724 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801769 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905796 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905814 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009845 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009895 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009914 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.111046 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:37:55.356080023 +0000 UTC Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113887 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113908 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113961 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.145833 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.145908 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.145833 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146083 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146283 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.146682 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146884 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217334 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217363 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.301504 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.320923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.320991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321059 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321967 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.343047 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.367155 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.389034 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.412430 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425632 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425761 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.428704 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.448895 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.481932 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.507157 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.527284 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529539 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529681 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.533351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.533681 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.533820 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:39.533785554 +0000 UTC m=+43.050144657 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.550443 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.569558 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.595231 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.612117 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.628739 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634281 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634407 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634426 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.653507 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738801 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738862 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842132 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842176 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946120 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946143 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050528 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.112275 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:19:55.89890822 +0000 UTC Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156919 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156946 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.260923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.260993 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.261012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.261040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.261057 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364530 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364710 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467764 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467784 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571222 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674927 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778508 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778565 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.881861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.881960 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.881991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.882030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.882055 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985768 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985933 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985957 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089659 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.113250 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:18:52.192705736 +0000 UTC Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.145958 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.146017 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.145958 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.146239 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146254 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146387 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146629 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146749 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.173211 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.191228 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193698 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193752 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.205727 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.225925 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.250223 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.268564 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.286948 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296093 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296152 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296173 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296188 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.307557 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.324784 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 
15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.348872 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.374813 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.395623 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399224 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399270 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399323 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.406518 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.420693 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.452649 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.472092 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 
15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502267 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502301 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502339 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606173 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606209 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606232 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.709942 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.709989 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.710004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.710023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.710038 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813432 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813627 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917345 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021485 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021600 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021696 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021863 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.113447 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:54:37.878158138 +0000 UTC Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125867 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125892 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229453 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.334936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335104 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.439720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440464 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440656 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.544990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545061 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545080 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545132 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647890 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647902 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750822 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750869 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750887 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853616 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853640 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956972 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060028 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060090 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060147 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.114455 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:24:10.760533693 +0000 UTC Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.144907 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.145036 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.145062 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145255 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145144 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.145325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145397 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145475 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163261 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266832 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370338 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370400 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474908 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474917 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.578912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579029 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579068 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579091 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.590758 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.591035 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.591190 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:47.591150218 +0000 UTC m=+51.107509481 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682527 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682547 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682630 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785331 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785405 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785415 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888208 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888252 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888269 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991350 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.093892 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.093956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.093978 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.094005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.094024 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.114621 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:09:34.343156238 +0000 UTC Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197356 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197414 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197432 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300125 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300135 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.402888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403000 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403070 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506093 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506123 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506136 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609321 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609368 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609389 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713043 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713172 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713209 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816190 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816204 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919511 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919609 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919622 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022344 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022441 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.115652 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:05:27.135852185 +0000 UTC Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125027 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125136 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.145662 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.145755 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.145909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.146174 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.146386 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.146448 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.146844 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.147077 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227141 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227178 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.330112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.330765 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.330937 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.331114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.331281 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434748 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537932 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.538035 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641081 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641093 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744316 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744329 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847410 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.950999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951062 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951082 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951100 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055404 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.115807 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:44:45.505893387 +0000 UTC
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.158844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.158915 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.158939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.159045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.159071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.262734 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263029 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263377 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
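The certificate_manager.go line above is worth decoding: the kubelet-serving certificate does not expire until 2026-02-24, but the computed rotation deadline (2025-11-23) is already behind the node clock (2026-02-17), so the manager considers rotation overdue. A hedged sketch of that arithmetic follows, assuming client-go's policy of picking a jittered deadline 70-90% of the way through the certificate's validity window; the PEM path is an assumption for illustration.

```go
// Minimal sketch, not client-go's exact code: reads a kubelet serving
// certificate from a PEM file and reports its validity window plus a
// rotation deadline chosen at a jittered point 70-90% of the way through
// the certificate's lifetime (assumed to mirror the manager's policy).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	// Assumed path; the log does not name the file on disk.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	lifetime := cert.NotAfter.Sub(cert.NotBefore)
	// Jittered deadline in the 70-90% window of the validity period.
	deadline := cert.NotBefore.Add(time.Duration((0.7 + 0.2*rand.Float64()) * float64(lifetime)))
	fmt.Println("expires:          ", cert.NotAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		// The state in this log: past the deadline, so rotation is attempted.
		fmt.Println("past rotation deadline; manager will attempt rotation")
	}
}
```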
Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369698 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473822 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473839 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577508 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577593 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577603 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.683859 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684464 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787318 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787825 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.788076 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.890968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891014 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891051 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.993985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994043 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994051 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.097694 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.097786 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.097807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.098428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.098705 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.117192 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:10:00.748856459 +0000 UTC
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145364 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145555 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145779 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145879 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.145862 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.146151 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.146291 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.146432 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202294 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202387 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202421 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202450 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
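The Ready=False condition that setters.go:603 keeps publishing locally is the same condition a client would read back from the API server. A short client-go sketch follows, assuming a reachable API server and a kubeconfig path (both assumptions; the status patches in this very log start failing further down):

```go
// Hedged sketch: read the node's Ready condition back via client-go,
// showing where "KubeletNotReady" and the CNI message surface on the
// Node object itself. Kubeconfig path and reachability are assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// For the log above this prints Status=False, Reason=KubeletNotReady.
			fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}
```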
Has your network provider started?"}
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305623 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322362 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.345821 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352081 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352150 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352217 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.373811 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379334 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379470 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379491 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379547 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.393209 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401512 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401538 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.416995 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423508 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423560 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.441334 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.441610 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
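The failure mode above is self-consistent: every status PATCH is rejected because the node.network-node-identity.openshift.io webhook serves a certificate that expired on 2025-08-24, while the node clock reads 2026-02-17, and once the fixed retry budget is spent (five attempts in the upstream kubelet source, though treat that count as an assumption here) the kubelet gives up with "update node status exceeds retry count". Below is a minimal sketch of the same validity-window check the TLS client performs, assuming the cryptography package (42 or newer for the *_utc accessors); the PEM path is hypothetical, standing in for a copy of the certificate served on 127.0.0.1:9743.

    # Sketch: the x509 validity-window check failing in the log above.
    # Assumptions: the `cryptography` package (>= 42) is installed, and
    # PEM_PATH is a hypothetical on-disk copy of the webhook's serving cert.
    import datetime
    from cryptography import x509

    PEM_PATH = "/tmp/network-node-identity-cert.pem"  # hypothetical path

    with open(PEM_PATH, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    now = datetime.datetime.now(datetime.timezone.utc)
    if now > cert.not_valid_after_utc:
        # Same comparison the log reports: "current time ... is after ..."
        print(f"expired: current time {now:%Y-%m-%dT%H:%M:%SZ} "
              f"is after {cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")
    elif now < cert.not_valid_before_utc:
        print("not yet valid")
    else:
        print("certificate is within its validity window")

Run against the certificate actually served on :9743, this should print the same "current time ... is after 2025-08-24T17:21:41Z" comparison that the webhook client logs.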
event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445181 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548881 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548911 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548931 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658713 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658855 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762529 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871698 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.974899 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.974969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.974982 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.975002 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.975015 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077766 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077826 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077834 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.117438 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:37:13.588781464 +0000 UTC Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180537 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283843 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283857 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283866 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.386795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.386954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.386986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.387019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.387044 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489517 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489601 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489632 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592791 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592883 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592913 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.695964 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696007 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696044 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799116 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799160 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799194 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901881 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901895 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901913 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901925 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004705 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004714 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107335 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107359 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.118449 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:58:10.264282351 +0000 UTC Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145073 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.145657 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145889 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145983 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.146165 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.146965 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.147133 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210216 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210259 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312979 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312996 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416881 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416972 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416997 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.417015 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520809 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520836 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626100 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626117 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626154 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729375 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729487 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729506 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832451 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832516 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832541 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935118 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039879 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.119132 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:52:53.426421425 +0000 UTC Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144191 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144347 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.145973 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.248701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249142 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249172 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249213 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352473 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455836 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455885 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455904 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.536154 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.540671 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.541985 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.558902 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560792 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560948 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.589867 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.627333 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.654105 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664774 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664794 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.678107 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.702126 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.720016 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 
15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.744288 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.762925 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768793 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768863 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.779792 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.793202 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.811136 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.826125 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.839940 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.858531 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873107 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873170 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.874710 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976360 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976393 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081150 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081192 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.119526 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:49:28.729492608 +0000 UTC Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.144960 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.145051 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.145164 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145159 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.145325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145440 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145315 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145780 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.166736 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187672 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187768 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187816 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187889 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.203671 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.218566 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.229389 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.249091 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.261515 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.276458 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.289329 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.290952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.290985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.290996 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.291012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.291022 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.303170 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.314711 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.330053 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.342501 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.357042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.372234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.383822 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393913 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393929 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497743 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497814 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497844 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.547428 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.548313 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.552061 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" exitCode=1 Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.552104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.552157 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.553102 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.553340 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.576918 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.596240 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602524 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602697 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.617564 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.633265 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.653148 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.681181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.681497 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.681618 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:03.681566257 +0000 UTC m=+67.197925370 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.682650 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e6
90f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.705960 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706054 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.731838 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.757545 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.779794 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.797071 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.814083 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.837017 4808 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.851863 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.871487 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.896374 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913252 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913287 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913307 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016425 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016496 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119787 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119806 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119755 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:39:02.480683182 +0000 UTC Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119838 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119921 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222911 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222959 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326366 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326525 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430480 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533867 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533894 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.559370 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.565533 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:54:48 crc kubenswrapper[4808]: E0217 15:54:48.565912 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.589543 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.610517 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.633384 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636876 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636887 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636905 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636918 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.655568 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.678854 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.699825 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.714681 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.731790 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf83
3d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740084 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740122 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740138 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.751152 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.767454 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.784349 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.799991 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.813765 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.828170 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842892 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842919 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.860103 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e6
90f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.888117 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945774 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945787 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.999015 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:48 crc kubenswrapper[4808]: E0217 15:54:48.999323 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:20.999270684 +0000 UTC m=+84.515629797 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051137 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051179 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101168 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101376 
4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101437 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101463 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101477 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101442 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:21.101424073 +0000 UTC m=+84.617783166 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101535 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:21.101523366 +0000 UTC m=+84.617882449 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101623 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101641 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101658 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101685 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101716 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:21.101699621 +0000 UTC m=+84.618058714 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101956 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:21.101877305 +0000 UTC m=+84.618236378 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.120411 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:56:05.744036073 +0000 UTC Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.144812 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.144976 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.144981 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145090 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.145123 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145641 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145686 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145279 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157080 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157194 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261152 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261570 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261646 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364679 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364704 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364722 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468784 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468921 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571049 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571155 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571177 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674942 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.675007 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.742802 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.762643 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.774665 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780499 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.797339 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.816736 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.835334 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.859104 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.882707 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.884222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.885548 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.885658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.885777 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.885803 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.903901 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.928469 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.958242 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.981978 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.988951 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989051 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989088 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989109 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.004593 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.056429 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.075886 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.092534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.092850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.092937 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.093009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.093081 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.105795 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.121495 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:32:05.709766168 +0000 UTC Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.132031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e6
90f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.149625 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196370 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196403 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299610 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299633 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403176 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403231 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403275 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403295 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506083 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506178 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609630 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609665 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713412 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817343 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817461 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817481 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.920863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.920948 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.920987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.921028 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.921055 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025284 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.122683 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:53:48.028469062 +0000 UTC Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129307 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.145773 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.146183 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.146305 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.146403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.146520 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.146644 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.146916 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.147216 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233100 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233277 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337128 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337191 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441171 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441244 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441315 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546120 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546197 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650682 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650764 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650784 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650836 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754366 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754377 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857825 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857853 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857873 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962113 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962264 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065883 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065946 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.123219 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:04:09.428987082 +0000 UTC
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170455 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170526 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170647 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274314 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274392 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378169 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481514 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481607 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481671 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585200 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585218 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691920 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691966 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691986 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796063 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796122 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796182 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899500 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003835 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107718 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107753 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.123837 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:57:02.489994545 +0000 UTC
Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145613 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145611 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145854 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.145990 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.146158 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145817 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.146391 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.146547 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210481 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313357 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313416 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313444 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417365 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417402 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520920 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520950 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.521009 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624334 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624402 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729676 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729703 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731400 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731494 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731526 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731546 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.750817 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757173 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757241 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.775743 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782200 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.797743 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802830 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802867 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802880 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.818696 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
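Both failed patch attempts above come down to one root cause: the node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2026-02-17. The Go sketch below (a hypothetical standalone check, not the kubelet's or the webhook's code; the certificate path is an assumption) reproduces the validity-window test that crypto/x509 applies during verification, which is what yields the exact "certificate has expired or is not yet valid" wording in the error.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path for illustration; point it at the webhook's serving cert.
        pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        // crypto/x509 performs this window check during chain verification; a
        // time outside [NotBefore, NotAfter] produces CertificateInvalidError
        // with reason Expired, printed as "x509: certificate has expired or
        // is not yet valid: current time ... is after ...".
        if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
            fmt.Printf("certificate invalid: current time %s is not within [%s, %s]\n",
                now.Format(time.RFC3339),
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339))
            return
        }
        fmt.Println("certificate is within its validity window")
    }

Because the check is done by the API server when it calls the webhook, no amount of kubelet retrying can succeed until that serving certificate is rotated.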
event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823399 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.839894 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.840115 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842344 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
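The "exceeds retry count" line marks the end of a bounded retry loop: the kubelet attempts the status patch a fixed number of times per sync (nodeStatusUpdateRetry, 5 in upstream Kubernetes) before giving up until the next sync interval. A minimal sketch of that pattern follows; tryUpdateNodeStatus is a stand-in for the real patch call, not the kubelet's implementation.

    package main

    import (
        "errors"
        "fmt"
    )

    // Mirrors the kubelet's retry budget for node status updates.
    const nodeStatusUpdateRetry = 5

    func tryUpdateNodeStatus(attempt int) error {
        // Stand-in: the real call PATCHes the Node object and keeps failing
        // while the admission webhook's serving certificate is expired.
        return errors.New("failed calling webhook \"node.network-node-identity.openshift.io\": certificate has expired")
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(i); err != nil {
                fmt.Printf("Error updating node status, will retry: %v\n", err)
                continue
            }
            return nil
        }
        return fmt.Errorf("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println(err)
        }
    }

The loop explains the log shape exactly: several "will retry" errors carrying the same payload, then one terminal "Unable to update node status" line.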
event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842422 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842461 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.945985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946070 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946097 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946117 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049819 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049896 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049937 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049958 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.124718 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:23:18.17107601 +0000 UTC Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162765 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162882 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266502 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266858 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370790 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473992 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577869 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681557 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681703 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681758 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784786 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784814 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784828 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887903 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887969 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990643 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990680 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094074 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094201 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094224 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.124881 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:51:26.626848063 +0000 UTC Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.145727 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.145939 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146032 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.146095 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.146053 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146229 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146651 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198400 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198474 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303011 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303095 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303106 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406927 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.509938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510281 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510344 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510414 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613459 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613635 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716598 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716631 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819819 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819833 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923234 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923349 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027865 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027899 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027919 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.125782 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:05:57.848483684 +0000 UTC Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131132 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131235 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131320 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.234902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.234975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.234994 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.235020 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.235040 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339086 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339253 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339272 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441897 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441950 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441983 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441997 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545485 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545505 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649551 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759739 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863000 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863084 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863163 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863188 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967237 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967292 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967314 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070320 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070432 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070448 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.126517 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:19:48.13221218 +0000 UTC Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145544 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145642 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145670 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.145763 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145922 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.146105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.146325 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.146986 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
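The status patches that follow all fail for the same root cause: the node-identity webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-02-17, so Go's TLS stack rejects it during verification. A minimal sketch of that validity-window check; the times are taken from the log, the helper itself is illustrative rather than the crypto/x509 implementation:

// Sketch: the comparison behind "x509: certificate has expired or is not
// yet valid: current time ... is after ...".
package main

import (
	"fmt"
	"time"
)

func checkValidity(now, notBefore, notAfter time.Time) error {
	if now.Before(notBefore) {
		return fmt.Errorf("certificate not yet valid: current time %s is before %s",
			now.Format(time.RFC3339), notBefore.Format(time.RFC3339))
	}
	if now.After(notAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	now := time.Date(2026, 2, 17, 15, 54, 57, 0, time.UTC)     // node clock from the log
	notAfter := time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC) // webhook cert expiry from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                     // assumed one-year validity
	fmt.Println(checkValidity(now, notBefore, notAfter))
}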
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.165713 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.172998 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173127 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173149 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.183271 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.207556 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.226660 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.245786 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.271746 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276076 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276117 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.301280 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.318405 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.341054 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.368189 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d
62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.379871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.379949 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.379968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.380004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.380023 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.387734 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.409287 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.428213 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.444700 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.461534 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.483533 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484086 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484121 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.503217 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587667 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587731 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691275 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794759 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898367 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898413 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001404 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001447 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105143 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105171 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105188 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.127508 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:49:11.014264329 +0000 UTC Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.208893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.208971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.208990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.209017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.209036 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313236 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313258 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416737 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416767 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416778 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.519901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.519969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.519984 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.520012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.520032 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622853 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622885 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.726993 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727064 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727079 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727124 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830652 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934373 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934391 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042627 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042641 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.127951 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:36:46.317969178 +0000 UTC Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.144847 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.144901 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.144967 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.145058 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.145612 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.145740 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.145898 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.146030 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153158 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257365 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257446 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361287 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464216 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464255 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567087 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567242 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671593 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671675 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671708 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774855 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878088 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878131 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982235 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982303 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982333 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086757 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086813 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.128976 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:25:47.909029977 +0000 UTC Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190661 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293777 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293824 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293843 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397060 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397133 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397196 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500619 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603875 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603895 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603925 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706598 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810414 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913754 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017530 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017676 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017729 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120342 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120466 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.130204 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:08:22.920652421 +0000 UTC Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145713 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.145850 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145968 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145992 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145960 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.146078 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.146210 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.146454 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223176 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327605 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327647 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327698 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430672 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533521 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533604 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533655 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635605 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635648 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738543 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738674 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738686 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841250 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841408 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.943979 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048436 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048462 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.130364 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:06:17.32783357 +0000 UTC Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.145746 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:55:02 crc kubenswrapper[4808]: E0217 15:55:02.146056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152089 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152122 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254494 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254541 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357945 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357962 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461631 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564900 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564933 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564946 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.667930 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.667996 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.668010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.668033 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.668055 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771198 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771259 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771274 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771307 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873594 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873647 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873693 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.976935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977094 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079465 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079478 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.131137 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:26:51.87595655 +0000 UTC Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144854 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144881 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144940 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144858 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145028 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182483 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182619 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285015 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285060 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285074 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285093 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285104 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388513 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492232 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596099 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596170 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596182 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596211 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596222 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.682432 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.682673 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.682742 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:35.682726124 +0000 UTC m=+99.199085197 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered
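Editor's note: the MountVolume failure above is retried with a growing delay; this capture shows "durationBeforeRetry 32s" and a next attempt scheduled 32 seconds out. A small sketch of the doubling backoff that produces such a sequence, assuming kubelet's usual exponential volume-retry policy; the initial delay and cap below are illustrative assumptions, only the 32s step is taken from this log:

    # Hypothetical reconstruction of the retry delays behind
    # "durationBeforeRetry 32s": exponential backoff, doubling per
    # failed attempt and capped. Constants are assumptions.
    initial = 0.5        # seconds (assumed starting delay)
    factor = 2.0         # doubling per failure
    cap = 2 * 60 + 2     # assumed cap (2m2s)

    delay = initial
    for attempt in range(1, 12):
        print(f"attempt {attempt:2d}: retry in {delay:g}s")
        delay = min(delay * factor, cap)

Under these assumptions the seventh consecutive failure lands on the 32s delay seen above; the retries then saturate at the cap rather than growing without bound.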
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699155 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699174 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801627 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801743 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905791 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905810 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009528 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009569 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011363 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011455 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.039714 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
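Editor's note: the status patch itself is well-formed; it is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2026-02-17. A minimal sketch of inspecting that endpoint's certificate dates from the node; the host and port come from the log, the rest is illustrative and assumes the third-party cryptography package (42+ for the *_utc accessors). Verification is deliberately disabled since the whole point is to read an expired certificate:

    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party; assumed available

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log above

    # Fetch the peer certificate without validating it (it is expired).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", now > cert.not_valid_after_utc)

On this node the expected output would show notAfter 2025-08-24T17:21:41Z and expired: True, matching the x509 failure in the entry above.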
event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045612 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.063727 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069064 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069155 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069170 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069180 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.089660 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
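Editor's note: the status-patch failure at 15:55:04.063727 and the ones that follow carry a payload byte-identical to the first occurrence at 15:55:04.039714, so the repeats are elided above as \"{…}\"; only the timestamps differ. When triaging a capture like this, collapsing repeated kubelet messages makes the flood readable. A small sketch, where "kubelet.log" is a placeholder filename for a saved copy of this journal, not something named in the log:

    import re
    from collections import Counter

    # Group kubelet entries by source location and quoted message,
    # ignoring the per-entry timestamp/PID prefix.
    pattern = re.compile(
        r'kubenswrapper\[\d+\]: [IEW]\d{4} \S+ \d+ (\S+)\] ("[^"]*")?'
    )

    counts = Counter()
    with open("kubelet.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                counts[(m.group(1), m.group(2))] += 1

    for (location, message), n in counts.most_common(10):
        print(f"{n:6d}  {location}  {message or ''}")

Run against this section, the top entries would be kubelet_node_status.go:724 "Recording event message for node" and setters.go:603 "Node became not ready", confirming the heartbeat loop dominates the log while the node stays NotReady.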
event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094881 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.110348 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115294 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115366 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115404 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.128851 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.129013 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131243 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:53:32.434478704 +0000 UTC Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131602 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131747 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235816 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.339869 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340252 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340821 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340933 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444599 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444629 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548369 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548393 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548451 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.651013 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755337 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755398 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859786 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859854 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962628 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962723 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962776 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066095 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066198 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066249 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.132124 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:22:36.677875395 +0000 UTC Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.145908 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.145933 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146054 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.146054 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.146138 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146362 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146683 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168734 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272077 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272089 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272124 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375204 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375295 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478806 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581595 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684596 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684644 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787339 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787410 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787467 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.890820 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.890877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.897815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.898026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.898083 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002560 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105437 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105557 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.132987 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:34:25.602605336 +0000 UTC Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209141 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209234 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209262 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209310 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312237 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416211 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518618 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518638 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621321 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621425 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621447 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.635698 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/0.log" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.635787 4808 generic.go:334] "Generic (PLEG): container finished" podID="18916d6d-e063-40a0-816f-554f95cd2956" containerID="d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1" exitCode=1 Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.635838 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerDied","Data":"d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.636530 4808 scope.go:117] "RemoveContainer" containerID="d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.660093 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.682613 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.698123 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.714460 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727195 4808 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727279 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.736599 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] 
multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.760036 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status 
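The kube-multus termination message above shows the shape of the failure: the daemon started at 15:54:20, polled for the readiness-indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, and gave up at 15:55:05 with exit code 1 ("pollimmediate error: timed out waiting for the condition"). Below is a minimal Python sketch of that wait-for-file pattern; the path comes from the log, while the ~45s budget and 1s interval are inferred from the timestamps, not from multus's actual configuration.

    import time
    from pathlib import Path

    # Poll until `path` exists or the deadline passes -- the same
    # wait-for-file pattern the readiness-indicator check performs.
    def wait_for_file(path: Path, timeout_s: float = 45.0, interval_s: float = 1.0) -> bool:
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if path.exists():
                return True
            time.sleep(interval_s)
        return False

    # Path taken from the log above; timeout/interval are assumptions.
    indicator = Path("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf")
    if not wait_for_file(indicator):
        raise SystemExit(f"still waiting for readiness indicator @ {indicator}")

Until ovn-kubernetes writes that file, multus cannot report the default network as ready, which is why the node keeps posting the "no CNI configuration file in /etc/kubernetes/cni/net.d/" NotReady condition seen throughout this log.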
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.777303 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.798494 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
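Every status patch in this stretch fails with the identical webhook error, differing only in the pod name. A quick way to see the blast radius in a saved journal dump is a short tally like the sketch below; the regex keys off the status_manager.go fields exactly as they appear above, while the script name and the `journalctl -u kubelet` invocation are assumptions about how the log was captured.

    import re
    import sys
    from collections import Counter

    # Matches the failure lines as logged by status_manager.go:
    #   "Failed to update status for pod" pod="namespace/name"
    PAT = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')

    counts = Counter(m.group(1) for line in sys.stdin for m in PAT.finditer(line))
    for pod, n in counts.most_common():
        print(f"{n:4d}  {pod}")

Example use (unit name assumed): journalctl -u kubelet --no-pager | python3 tally_webhook_failures.py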
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.817099 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830723 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.834182 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.848989 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.865521 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 
15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.880872 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.894330 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.907909 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.919766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933240 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933301 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933311 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.940853 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036409 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036450 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.133616 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:26:11.207603569 +0000 UTC Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139310 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139347 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145462 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145462 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145551 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145657 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.145868 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.145953 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
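One line in this block is easy to skim past: certificate_manager.go reports a kubelet-serving certificate valid until 2026-02-24 but a rotation deadline of 2025-11-12, already months in the past at the node's current time, so the kubelet considers rotation due immediately. client-go's certificate manager derives that deadline by jittering within the certificate's lifetime; the sketch below assumes the commonly cited 70-90% window, and the notBefore value is a guess for illustration, since only the expiry appears in the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics how client-go's certificate manager schedules
// rotation: a uniformly random point between 70% and 90% of the
// certificate's total lifetime (assumed jitter window).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // hypothetical issuance
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log line
	deadline := rotationDeadline(notBefore, notAfter)
	now := time.Date(2026, 2, 17, 15, 55, 7, 0, time.UTC) // node clock per the log
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("already past due:", deadline.Before(now))
}

Under that assumption any deadline lands no later than ~90% through the year of validity, i.e. well before 2026-02-17, which is consistent with the past-due deadline the kubelet logs here.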
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.145996 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.146175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.160773 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1
bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.175112 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.193967 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.219139 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.244791 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.247852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248133 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248185 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.263795 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.274857 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
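The recurring setters.go "Node became not ready" entries embed the full Ready condition as inline JSON, which makes this failure mode easy to detect mechanically when triaging a journal like this one. A small decoder sketch follows; the struct is hand-written to mirror the NodeCondition fields as they appear in the log, and the sample payload is copied verbatim from the entry above:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields of a Kubernetes NodeCondition as they
// appear in the kubelet's "Node became not ready" log entries.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// condition={...} payload copied from the log above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	fmt.Println(c.Message)
}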
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.289741 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.310716 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.323187 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.338082 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350879 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350977 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350991 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.353058 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.363521 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.374659 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.389111 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.400555 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.414472 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454241 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454267 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.557621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558007 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558054 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.640815 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/0.log" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.640878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.658748 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660953 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.661001 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.677083 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.693738 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.713790 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.727247 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.740196 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.755819 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763633 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763665 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.770397 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.787823 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.801186 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.847036 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867157 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867203 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867223 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.870160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.887123 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 
15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.912427 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.931974 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.944836 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.956018 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970374 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970432 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970501 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074313 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.134197 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:05:06.130837619 +0000 UTC Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177951 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177970 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177994 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.178013 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.280926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281027 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281054 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281078 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384841 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487897 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591627 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591694 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591718 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695610 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799382 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799395 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903337 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903429 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903481 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006421 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006444 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006460 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.108928 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.108986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.109000 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.109021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.109034 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.135235 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:46:13.981476476 +0000 UTC Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.145706 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.145790 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146055 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.146110 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146176 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146353 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.146452 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146878 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212497 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212545 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212602 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212616 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315934 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419259 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419300 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419318 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522793 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522882 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.627005 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729769 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833378 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833428 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936590 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936702 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040343 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040362 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040374 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.136351 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:07:56.272787719 +0000 UTC Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144774 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144808 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144834 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247519 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247653 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351521 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351618 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351689 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460553 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460675 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563303 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563377 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563419 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666516 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666529 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769724 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769739 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872511 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872547 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976674 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080466 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080523 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080545 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.137482 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:59:46.503713691 +0000 UTC Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.144842 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.145029 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.145165 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.145251 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.145305 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.145343 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.145682 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.146026 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.183975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184034 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184074 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287569 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391763 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391781 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391831 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495286 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495342 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495364 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599715 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599828 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599847 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703165 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703320 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.807971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808047 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808066 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808096 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808120 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911864 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015395 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015489 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118760 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118824 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.138597 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:44:54.513654428 +0000 UTC Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222803 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326147 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326258 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326276 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430165 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430214 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533607 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533667 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638172 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.741935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.741999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.742013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.742038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.742056 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846217 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949521 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949548 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949614 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949667 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054242 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054267 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054300 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054320 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.139623 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:54:16.680292903 +0000 UTC Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.145038 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.145474 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.145753 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.145816 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.146926 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.147046 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.147462 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.147639 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156897 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260437 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260628 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260651 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363704 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363733 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363752 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467324 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467379 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570349 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570368 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570418 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672961 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776903 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776949 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776970 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880962 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880999 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984770 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984785 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088825 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.141990 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:41:02.529625845 +0000 UTC Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192502 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296483 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296605 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296655 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334283 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334329 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.357018 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363766 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.385990 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393271 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.414690 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420740 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.441554 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447453 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447522 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.469315 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.469531 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472346 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472526 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472616 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576718 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679419 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679439 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783762 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887373 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887445 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990271 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094667 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094685 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094734 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.142827 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:39:41.082982418 +0000 UTC Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.145736 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.145806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.145970 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.146130 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.146278 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.146304 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.146493 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.146873 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198788 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198843 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198864 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304630 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304687 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441979 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.442010 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545692 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545721 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545741 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649837 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649883 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753877 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857441 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857459 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961754 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066932 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066975 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.143224 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:48:35.189546079 +0000 UTC Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172184 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172255 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172296 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172310 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276233 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379643 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379746 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379808 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483781 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483800 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587760 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587778 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587818 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691244 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691316 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794908 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794978 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.795004 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901109 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901229 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003974 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003986 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106915 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106970 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106994 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.144463 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:54:30.415624609 +0000 UTC Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.144868 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.145099 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145095 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.145304 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.145433 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145532 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145729 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145789 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.147845 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.167200 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.183392 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.200079 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211091 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211190 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211281 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.219886 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.236145 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 
15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.263234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.280286 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.297879 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315171 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315251 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.332488 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.354639 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.379784 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.400682 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.418044 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419824 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.435766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.453021 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.469957 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.483168 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.522863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523289 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523728 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627713 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627775 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.693898 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.701269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.702009 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.724799 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.730948 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.730990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.731008 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.731030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.731044 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.746119 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.768031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889630 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889661 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.888518 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.914515 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 
15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.931265 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.959399 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.974902 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.990533 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002587 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002693 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002706 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.011110 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.025704 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.037946 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.053865 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 
15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.081756 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.099105 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105736 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.115924 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.136641 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.144659 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:12:38.599292047 +0000 UTC Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.158188 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208563 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208672 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208721 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.311941 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.311999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.312018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.312046 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.312064 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415512 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415564 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519215 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622733 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622759 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622775 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.709287 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.710348 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.714778 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" exitCode=1 Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.714950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.715040 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.716121 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:18 crc kubenswrapper[4808]: E0217 15:55:18.716405 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726322 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726432 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.742141 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.760294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.784160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.808615 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.830886 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.830961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.830981 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.831010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.831029 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.847528 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"il), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:18.361927 6847 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362067 6847 services_controller.go:452] Built service openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362078 6847 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0217 15:55:18.362100 6847 services_controller.go:453] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB for network=default: []services.LB{}\\\\nF0217 15:55:18.362112 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.869636 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8
d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.886510 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03f8049-78a3-4d6f-a6a2-894fc1a93f11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ac0bd1d8baf07179e911a15b5cb9c2137b2711e56c6a0243052ad67ff8ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.918549 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934321 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934409 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934423 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.945072 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.968329 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.986515 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.013089 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.032724 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 
15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040123 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040223 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.082307 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.103013 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.121262 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.140221 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.144825 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.144909 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.144924 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:05:27.892137941 +0000 UTC Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.144825 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145083 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145161 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145295 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145496 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145500 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145692 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145714 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.168214 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249280 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353882 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353992 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.354008 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457491 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457592 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457607 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560880 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560890 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664667 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664711 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.723203 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.730064 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.730884 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.750724 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768339 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.773015 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.788980 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.813991 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.848119 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"il), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:18.361927 6847 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362067 6847 services_controller.go:452] Built service openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362078 6847 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0217 15:55:18.362100 6847 services_controller.go:453] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB for network=default: []services.LB{}\\\\nF0217 15:55:18.362112 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.868790 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873601 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873656 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873674 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873702 4808 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873723 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.888505 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03f8049-78a3-4d6f-a6a2-894fc1a93f11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ac0bd1d8baf07179e911a15b5cb9c2137b2711e56c6a0243052ad67ff8ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.910071 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.928478 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.946797 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.965460 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980404 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980425 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980458 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980486 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.993120 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-api
server-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.012819 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.039875 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.061193 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.081809 4808 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf83
3d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.084836 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.084917 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.084941 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.085116 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.085145 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.102844 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.127754 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.145419 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:49:53.651855464 +0000 UTC Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188561 4808 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188738 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188986 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292721 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292742 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396527 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396601 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499519 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.602943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603553 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603701 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707855 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810791 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.913916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.913994 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.914013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.914040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.914059 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017773 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017803 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.034922 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.035145 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.035103539 +0000 UTC m=+148.551462652 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.121844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.121944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.121976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.122003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.122023 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137129 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137243 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137285 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137349 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137403 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137541 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137507351 +0000 UTC m=+148.653866454 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137541 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137554 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137625 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137631 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137658 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137679 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137704 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137708 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137681175 +0000 UTC m=+148.654040278 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137747 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137728296 +0000 UTC m=+148.654087399 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137782 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137766247 +0000 UTC m=+148.654125360 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145146 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145196 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145159 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145361 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145409 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145607 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145674 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:48:27.417156686 +0000 UTC Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145792 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145861 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225950 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225973 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330027 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330097 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330116 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330143 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330162 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433466 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433531 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433547 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433597 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433617 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.537008 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640734 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640780 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743469 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743539 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743557 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743666 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846747 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846767 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951338 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951394 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.146432 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:41:32.460563848 +0000 UTC Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158393 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158468 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261469 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364809 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364837 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364860 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468059 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571490 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571627 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675322 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778876 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882129 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882220 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882242 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882262 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986455 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986735 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090726 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145421 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145483 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.145650 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145727 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.145856 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.145979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.146254 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.147385 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:41:42.693545656 +0000 UTC Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193699 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297365 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297456 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.400987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401104 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504410 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608320 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608377 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608419 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711758 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711824 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711870 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711889 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815911 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815973 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.816004 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.919896 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.919954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.919965 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.920006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.920019 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023723 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127302 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127347 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.147899 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:42:57.678122673 +0000 UTC Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238706 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238746 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238775 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.341975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342078 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342134 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445767 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445786 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526188 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564203 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564230 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.618464 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76"] Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.619226 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.622023 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.622360 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.622448 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.623734 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.678973 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2737fdbb-be6e-4b06-bdf6-43aeb1186369-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679066 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679160 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2737fdbb-be6e-4b06-bdf6-43aeb1186369-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679314 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2737fdbb-be6e-4b06-bdf6-43aeb1186369-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.692040 4808 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-multus/multus-msgfd" podStartSLOduration=67.692024378 podStartE2EDuration="1m7.692024378s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.653840147 +0000 UTC m=+88.170199230" watchObservedRunningTime="2026-02-17 15:55:24.692024378 +0000 UTC m=+88.208383461" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.709498 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.70947155 podStartE2EDuration="35.70947155s" podCreationTimestamp="2026-02-17 15:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.708906636 +0000 UTC m=+88.225265749" watchObservedRunningTime="2026-02-17 15:55:24.70947155 +0000 UTC m=+88.225830633" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.725836 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.725811393 podStartE2EDuration="6.725811393s" podCreationTimestamp="2026-02-17 15:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.725159986 +0000 UTC m=+88.241519089" watchObservedRunningTime="2026-02-17 15:55:24.725811393 +0000 UTC m=+88.242170506" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780707 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2737fdbb-be6e-4b06-bdf6-43aeb1186369-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2737fdbb-be6e-4b06-bdf6-43aeb1186369-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780941 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2737fdbb-be6e-4b06-bdf6-43aeb1186369-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780993 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.781101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.781175 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.783484 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2737fdbb-be6e-4b06-bdf6-43aeb1186369-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.791902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2737fdbb-be6e-4b06-bdf6-43aeb1186369-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.806625 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2737fdbb-be6e-4b06-bdf6-43aeb1186369-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.839412 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-f8pfh" podStartSLOduration=68.839374181 podStartE2EDuration="1m8.839374181s" podCreationTimestamp="2026-02-17 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.81933565 +0000 UTC m=+88.335694783" watchObservedRunningTime="2026-02-17 15:55:24.839374181 +0000 UTC m=+88.355733254" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.839614 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" podStartSLOduration=66.839609837 podStartE2EDuration="1m6.839609837s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.838822446 +0000 UTC m=+88.355181559" watchObservedRunningTime="2026-02-17 15:55:24.839609837 +0000 UTC m=+88.355968910" Feb 17 15:55:24 
crc kubenswrapper[4808]: I0217 15:55:24.882413 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.8823759 podStartE2EDuration="1m8.8823759s" podCreationTimestamp="2026-02-17 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.862793811 +0000 UTC m=+88.379152964" watchObservedRunningTime="2026-02-17 15:55:24.8823759 +0000 UTC m=+88.398735003" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.895069 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pr5s4" podStartSLOduration=67.895045375 podStartE2EDuration="1m7.895045375s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.894668415 +0000 UTC m=+88.411027588" watchObservedRunningTime="2026-02-17 15:55:24.895045375 +0000 UTC m=+88.411404448" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.939785 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: W0217 15:55:24.989881 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2737fdbb_be6e_4b06_bdf6_43aeb1186369.slice/crio-37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481 WatchSource:0}: Error finding container 37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481: Status 404 returned error can't find the container with id 37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481 Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.001196 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.001171226 podStartE2EDuration="1m5.001171226s" podCreationTimestamp="2026-02-17 15:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.977969241 +0000 UTC m=+88.494328334" watchObservedRunningTime="2026-02-17 15:55:25.001171226 +0000 UTC m=+88.517530309" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.040018 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" podStartSLOduration=68.039994734 podStartE2EDuration="1m8.039994734s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:25.039943092 +0000 UTC m=+88.556302235" watchObservedRunningTime="2026-02-17 15:55:25.039994734 +0000 UTC m=+88.556353817" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.145282 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.145423 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.145521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.145289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.145762 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.146204 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.146482 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.146822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.148004 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:20:22.349480065 +0000 UTC Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.148060 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.155536 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.754207 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" event={"ID":"2737fdbb-be6e-4b06-bdf6-43aeb1186369","Type":"ContainerStarted","Data":"42737538f82a4ba95a740ff938504a0e1c236bf7b0e67b94a50d9b0fab529bab"} Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.754306 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" event={"ID":"2737fdbb-be6e-4b06-bdf6-43aeb1186369","Type":"ContainerStarted","Data":"37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481"} Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.775920 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podStartSLOduration=68.775887614 podStartE2EDuration="1m8.775887614s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:25.055790463 +0000 UTC m=+88.572149606" watchObservedRunningTime="2026-02-17 15:55:25.775887614 +0000 UTC m=+89.292246697" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.145557 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.145663 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.145795 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.145881 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.156354 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.156699 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.157330 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.156812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.144972 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.145019 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.145033 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.145117 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.145301 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.145660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.146794 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.147056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:30 crc kubenswrapper[4808]: I0217 15:55:30.146434 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:30 crc kubenswrapper[4808]: E0217 15:55:30.146778 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.145668 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.145806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.145878 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.146053 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.145806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.146286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.146379 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.146623 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144834 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144882 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144866 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144819 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145019 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145172 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145303 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145418 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.145816 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.145942 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.146034 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146032 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146195 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.146268 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146357 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146435 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.729283 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.729671 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.730142 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. 
Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.145466 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.145522 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.146958 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.147024 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147227 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147378 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147446 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147730 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.144967 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.145063 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.144967 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.145175 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.145724 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.145878 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.145997 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.146084 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.163221 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" podStartSLOduration=82.16319733 podStartE2EDuration="1m22.16319733s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:25.777770523 +0000 UTC m=+89.294129606" watchObservedRunningTime="2026-02-17 15:55:39.16319733 +0000 UTC m=+102.679556413"
Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.164298 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145181 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145276 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145372 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146619 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145405 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146791 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146786 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146604 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.145341 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.145532 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.145367 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.145553 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.145879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.146007 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.146790 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"
Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.146983 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593"
Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.147023 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.147092 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.145512 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.145694 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.145687 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.145979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.146070 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.146522 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.146695 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.146862 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.145225 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.145470 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.147412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.147509 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.147648 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.147871 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.148018 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.148197 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.193552 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.193519713 podStartE2EDuration="8.193519713s" podCreationTimestamp="2026-02-17 15:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:47.189495967 +0000 UTC m=+110.705855100" watchObservedRunningTime="2026-02-17 15:55:47.193519713 +0000 UTC m=+110.709878816" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.144856 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.145018 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.145433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.145546 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.145671 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.145871 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.145218 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.146137 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.144981 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.145186 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.145492 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.145630 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.145866 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.145870 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.145964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.146133 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.867553 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868383 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/0.log" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868447 4808 generic.go:334] "Generic (PLEG): container finished" podID="18916d6d-e063-40a0-816f-554f95cd2956" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e" exitCode=1 Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868495 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerDied","Data":"7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e"} Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868555 4808 scope.go:117] "RemoveContainer" containerID="d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.869350 4808 scope.go:117] "RemoveContainer" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e" Feb 17 15:55:52 crc kubenswrapper[4808]: E0217 15:55:52.869645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-msgfd_openshift-multus(18916d6d-e063-40a0-816f-554f95cd2956)\"" pod="openshift-multus/multus-msgfd" podUID="18916d6d-e063-40a0-816f-554f95cd2956" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145263 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145380 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.145492 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.145702 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145298 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.145894 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145272 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.146049 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.876486 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145422 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145475 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.145796 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145536 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145542 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.146057 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.146185 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.146329 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.129754 4808 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.144892 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.146770 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.146829 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.146839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.146921 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.146955 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.147152 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.147412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.254931 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:55:58 crc kubenswrapper[4808]: I0217 15:55:58.147069 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:58 crc kubenswrapper[4808]: E0217 15:55:58.147509 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145349 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145434 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145365 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.145638 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.145828 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.145961 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145971 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.146292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145551 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145712 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145561 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.145828 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145984 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.145964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.146099 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.146275 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:02 crc kubenswrapper[4808]: E0217 15:56:02.256353 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.145813 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.146055 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.146849 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.146911 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.147034 4808 scope.go:117] "RemoveContainer" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e" Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.147036 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.147397 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.147562 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.148848 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.924309 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log" Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.924964 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c"} Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145653 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.145848 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145551 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145996 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.146165 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.146232 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.146363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.145252 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.145417 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147324 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.147386 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.147352 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147479 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147614 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147707 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.257210 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.145797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.145902 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146012 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.146041 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.146159 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146371 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146429 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.145630 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.145745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.145891 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.145906 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.146002 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.146191 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.146320 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.146500 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.147786 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.969261 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log" Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.973737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.974774 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:56:12 crc kubenswrapper[4808]: I0217 15:56:12.178695 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podStartSLOduration=115.178660586 podStartE2EDuration="1m55.178660586s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:12.026785759 +0000 UTC m=+135.543144882" watchObservedRunningTime="2026-02-17 15:56:12.178660586 +0000 UTC m=+135.695019699" Feb 17 15:56:12 crc kubenswrapper[4808]: I0217 15:56:12.180394 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z8tn8"] Feb 17 15:56:12 crc kubenswrapper[4808]: I0217 15:56:12.180560 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:12 crc kubenswrapper[4808]: E0217 15:56:12.180815 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:12 crc kubenswrapper[4808]: E0217 15:56:12.259372 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:13 crc kubenswrapper[4808]: I0217 15:56:13.145403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:13 crc kubenswrapper[4808]: I0217 15:56:13.145481 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:13 crc kubenswrapper[4808]: I0217 15:56:13.145403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:13 crc kubenswrapper[4808]: E0217 15:56:13.145724 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:13 crc kubenswrapper[4808]: E0217 15:56:13.146101 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:13 crc kubenswrapper[4808]: E0217 15:56:13.146338 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:14 crc kubenswrapper[4808]: I0217 15:56:14.145373 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:14 crc kubenswrapper[4808]: E0217 15:56:14.145674 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:15 crc kubenswrapper[4808]: I0217 15:56:15.145763 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:15 crc kubenswrapper[4808]: I0217 15:56:15.145886 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:15 crc kubenswrapper[4808]: I0217 15:56:15.145890 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:15 crc kubenswrapper[4808]: E0217 15:56:15.146025 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:15 crc kubenswrapper[4808]: E0217 15:56:15.146281 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:15 crc kubenswrapper[4808]: E0217 15:56:15.146735 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:16 crc kubenswrapper[4808]: I0217 15:56:16.145033 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:16 crc kubenswrapper[4808]: E0217 15:56:16.145279 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:56:17 crc kubenswrapper[4808]: I0217 15:56:17.145868 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:17 crc kubenswrapper[4808]: I0217 15:56:17.146022 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:17 crc kubenswrapper[4808]: E0217 15:56:17.147829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:17 crc kubenswrapper[4808]: E0217 15:56:17.148015 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:17 crc kubenswrapper[4808]: I0217 15:56:17.148119 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:17 crc kubenswrapper[4808]: E0217 15:56:17.148295 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:18 crc kubenswrapper[4808]: I0217 15:56:18.144957 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:18 crc kubenswrapper[4808]: I0217 15:56:18.148550 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 15:56:18 crc kubenswrapper[4808]: I0217 15:56:18.150378 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.145631 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.145946 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.145960 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.148643 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.149137 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.149222 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.151110 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 15:56:21 crc kubenswrapper[4808]: I0217 15:56:21.593104 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:56:21 crc kubenswrapper[4808]: I0217 15:56:21.593210 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:56:24 crc kubenswrapper[4808]: I0217 15:56:24.362767 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.110617 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:25 crc kubenswrapper[4808]: E0217 15:56:25.110920 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:58:27.110868111 +0000 UTC m=+270.627227214 (durationBeforeRetry 2m2s). 
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.110617 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:25 crc kubenswrapper[4808]: E0217 15:56:25.110920 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:58:27.110868111 +0000 UTC m=+270.627227214 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
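The TearDown failure is not a data problem: the kubelet simply has no registered CSI plugin named kubevirt.io.hostpath-provisioner yet, so the unmount is parked for a 2m2s retry. Drivers announce themselves through registration sockets that their node-registrar drops under the kubelet's plugins_registry directory; listing it on the node shows which drivers have checked in. A sketch, assuming the default kubelet path (socket names vary by driver):

    package main

    import (
    	"fmt"
    	"os"
    )

    // Lists CSI driver registration sockets. An empty listing here is
    // consistent with "not found in the list of registered CSI drivers".
    func main() {
    	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
    	if err != nil {
    		fmt.Println("cannot read plugins_registry:", err)
    		return
    	}
    	for _, e := range entries {
    		fmt.Println(e.Name()) // e.g. kubevirt.io.hostpath-provisioner-reg.sock
    	}
    }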
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.212812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.212933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.212998 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.213053 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.214816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.224373 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.224938 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.228453 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.463636 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.468990 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.476528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.695130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.733348 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jp8q"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.733934 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-srhjb"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.734250 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.734656 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.737108 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.737538 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.737831 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.738103 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.739290 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.739556 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
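Once the node goes NodeReady and the API server hands over the backlog of pods, the burst of reflector.go "Caches populated" entries that follows is the kubelet warming per-namespace watches for every Secret and ConfigMap those pods mount. The mechanism is the ordinary client-go reflector/informer cycle: LIST once to fill a local cache, then WATCH for changes. A standalone sketch of that cycle; the kubeconfig path is a placeholder for illustration, not something taken from this log:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Starts a Secret informer and blocks until its reflector has populated
    // the local cache, the same event the kubelet logs above.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(cfg)
    	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
    	secrets := factory.Core().V1().Secrets().Informer()
    	stop := make(chan struct{})
    	factory.Start(stop)
    	factory.WaitForCacheSync(stop) // returns once the initial LIST has filled the cache
    	fmt.Println("secret cache synced:", secrets.HasSynced())
    	close(stop)
    }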
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746457 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746629 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746734 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746833 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746937 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.747087 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.747198 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.747335 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748034 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748195 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748224 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748390 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748451 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748592 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748766 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748895 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748954 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.749091 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.749128 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.774705 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.774532 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.775051 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.780539 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.781299 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.784969 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785128 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785260 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785403 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785565 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.787036 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.789474 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.789792 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.789985 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.790125 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.790333 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.790912 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.791125 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.791248 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 
17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.793737 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.794521 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.797012 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.796600 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.797715 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.811621 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.811850 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.811888 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812105 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812136 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812200 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812487 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.814925 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mxgf8"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.815315 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.815517 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.817934 4808 util.go:30] "No sandbox for pod can be found. 
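Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mxgf8"

The reflector.go:368 "Caches populated" entries are client-go reflectors inside the kubelet: for every Secret and ConfigMap a just-admitted pod references, the kubelet starts a single-object list/watch and logs once the initial list has filled the local cache, so volume setup can read from memory rather than hitting the API server per mount. A standalone sketch of the same machinery built on client-go's shared informer factory; the kubeconfig path, namespace, and the ten-minute resync are assumptions for the example:

    // informer_sketch.go - a sketch of the client-go machinery behind the
    // reflector.go "Caches populated" lines: a reflector lists+watches a
    // resource and fills an in-memory cache before anything consumes it.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption; in-cluster config would also work.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // One factory scoped to a namespace; requesting the informer below
        // registers a reflector that lists and then watches ConfigMaps.
        factory := informers.NewSharedInformerFactoryWithOptions(
            cs, 10*time.Minute, informers.WithNamespace("openshift-apiserver"))
        factory.Core().V1().ConfigMaps().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Returns once the initial list is cached - the point at which the
        // kubelet logs "Caches populated for *v1.ConfigMap".
        for typ, ok := range factory.WaitForCacheSync(stop) {
            fmt.Printf("cache populated for %v: %v\n", typ, ok)
        }
    }
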
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.818773 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.818992 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819168 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819405 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819678 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819681 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.820953 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.820988 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10596b8a-e57a-498e-a7e8-e017fde34d54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821007 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821030 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821048 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-config\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7xcvb\" (UniqueName: \"kubernetes.io/projected/b9a99858-5ada-47b7-855c-8d3b43ab9fee-kube-api-access-7xcvb\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821085 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-node-pullsecrets\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821101 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9a99858-5ada-47b7-855c-8d3b43ab9fee-machine-approver-tls\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821117 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-config\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfqj\" (UniqueName: \"kubernetes.io/projected/10596b8a-e57a-498e-a7e8-e017fde34d54-kube-api-access-ldfqj\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821169 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/681a57d4-bd74-4910-a3f3-517b96a15123-kube-api-access-9b5xt\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821187 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-encryption-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821206 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-audit-policies\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821223 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-auth-proxy-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821239 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-image-import-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821254 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-serving-cert\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821271 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10596b8a-e57a-498e-a7e8-e017fde34d54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821287 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-encryption-config\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbmd2\" (UniqueName: \"kubernetes.io/projected/656b06bf-9660-4c18-941b-5e5589f0301a-kube-api-access-vbmd2\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821320 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821343 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821358 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/656b06bf-9660-4c18-941b-5e5589f0301a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821376 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-client\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821393 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-serving-cert\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821413 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fth\" (UniqueName: \"kubernetes.io/projected/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-kube-api-access-s8fth\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821434 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821589 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-client\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821616 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-images\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821645 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-serving-cert\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821697 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxs6p\" (UniqueName: \"kubernetes.io/projected/d0ee93f1-93ac-4db2-b35e-5be5bded6541-kube-api-access-wxs6p\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit-dir\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821753 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/681a57d4-bd74-4910-a3f3-517b96a15123-audit-dir\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.823182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.825960 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.832244 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.832479 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.832623 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.835930 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.835992 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836197 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 15:56:25 crc 
kubenswrapper[4808]: I0217 15:56:25.836282 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836382 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836422 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836504 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836546 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836658 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836785 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836888 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836944 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837029 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837062 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837151 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837180 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837269 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837480 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837604 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.838802 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.842851 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.843690 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.843906 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844022 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844216 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844434 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844507 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wlj8d"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.845052 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.845162 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.845883 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.846153 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.846249 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.847176 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.847846 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.849217 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.850272 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.858642 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.861561 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.861707 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.862747 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.863472 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.863655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.863656 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.878788 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.881504 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.882984 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.884363 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.882987 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.887224 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.888648 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.890159 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.890780 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.891287 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.891561 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.891917 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892114 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892305 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892510 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892867 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.895364 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.899360 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8js4"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.913359 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.913961 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.915128 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.915141 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.915941 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.916343 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.917008 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2lsb7"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.917974 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.918246 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.918335 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.921000 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.921520 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.922302 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jp8q"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.923424 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.924143 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.924783 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.925537 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-config\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.925607 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926638 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926673 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfqj\" (UniqueName: \"kubernetes.io/projected/10596b8a-e57a-498e-a7e8-e017fde34d54-kube-api-access-ldfqj\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926695 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926716 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926742 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-serving-cert\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926764 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926788 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/681a57d4-bd74-4910-a3f3-517b96a15123-kube-api-access-9b5xt\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926832 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c0b903-63ed-4811-a991-9a5751a4c640-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-encryption-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926891 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-trusted-ca\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-audit-policies\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926954 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-auth-proxy-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926976 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926995 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-image-import-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927052 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-serving-cert\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927070 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10596b8a-e57a-498e-a7e8-e017fde34d54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0131c573-bf76-49f4-9581-dd39ef60b27f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927106 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-config\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927474 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926559 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-config\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.925692 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.928909 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-encryption-config\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.928942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbmd2\" (UniqueName: \"kubernetes.io/projected/656b06bf-9660-4c18-941b-5e5589f0301a-kube-api-access-vbmd2\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929074 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929094 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c0b903-63ed-4811-a991-9a5751a4c640-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929218 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-client\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929601 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-serving-cert\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929620 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/656b06bf-9660-4c18-941b-5e5589f0301a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929639 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: 
\"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929655 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929681 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwn6m\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-kube-api-access-jwn6m\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fth\" (UniqueName: \"kubernetes.io/projected/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-kube-api-access-s8fth\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929728 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwlfb\" (UniqueName: \"kubernetes.io/projected/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-kube-api-access-pwlfb\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929760 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: 
\"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929933 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-client\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-images\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930014 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-serving-cert\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930072 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7096e1-8ca1-483d-8e12-1cc79d28182a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930100 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxs6p\" (UniqueName: \"kubernetes.io/projected/d0ee93f1-93ac-4db2-b35e-5be5bded6541-kube-api-access-wxs6p\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930145 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930166 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5tzz\" (UniqueName: \"kubernetes.io/projected/c8c0b903-63ed-4811-a991-9a5751a4c640-kube-api-access-k5tzz\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 
15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930191 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit-dir\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930657 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z82w8"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931219 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931291 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930189 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit-dir\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931371 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931409 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/681a57d4-bd74-4910-a3f3-517b96a15123-audit-dir\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931430 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnfd\" (UniqueName: \"kubernetes.io/projected/0131c573-bf76-49f4-9581-dd39ef60b27f-kube-api-access-pnnfd\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931449 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c7096e1-8ca1-483d-8e12-1cc79d28182a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931517 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931557 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931613 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931638 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931665 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10596b8a-e57a-498e-a7e8-e017fde34d54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931694 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931713 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931741 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931767 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttb4\" (UniqueName: \"kubernetes.io/projected/116ae5bc-cf7e-45ad-9800-501bcfc04ff7-kube-api-access-fttb4\") pod \"downloads-7954f5f757-wlj8d\" (UID: \"116ae5bc-cf7e-45ad-9800-501bcfc04ff7\") " pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931785 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-config\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931805 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcvb\" (UniqueName: \"kubernetes.io/projected/b9a99858-5ada-47b7-855c-8d3b43ab9fee-kube-api-access-7xcvb\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931826 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-node-pullsecrets\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9a99858-5ada-47b7-855c-8d3b43ab9fee-machine-approver-tls\") pod 
\"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931884 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.932010 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.932077 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/681a57d4-bd74-4910-a3f3-517b96a15123-audit-dir\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931491 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.932786 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.933544 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.933951 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-audit-policies\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.934398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-encryption-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.934467 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.935426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-image-import-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.936264 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-images\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.937009 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.937236 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10596b8a-e57a-498e-a7e8-e017fde34d54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.937737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.938787 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jp8q\" (UID: 
\"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.939568 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.940697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-client\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.941223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.943656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-serving-cert\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945402 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/656b06bf-9660-4c18-941b-5e5589f0301a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945652 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945706 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945722 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.946561 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.946912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-auth-proxy-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.947928 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-serving-cert\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.948173 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-node-pullsecrets\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.948433 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.950457 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.951415 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jwcd2"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.951620 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.951998 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.952070 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.952970 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.953561 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.955772 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-encryption-config\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.956635 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.959647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.960272 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.961991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9a99858-5ada-47b7-855c-8d3b43ab9fee-machine-approver-tls\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.965714 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bqslk"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.989288 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.990333 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10596b8a-e57a-498e-a7e8-e017fde34d54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.990821 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-client\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.991616 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.992799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.997049 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-config\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.997377 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-serving-cert\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009427 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009522 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009542 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mxgf8"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009627 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009644 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.011237 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.011506 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.012751 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.023619 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.023689 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.023807 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.025865 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dgt46"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.026072 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.028227 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.028470 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.034964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.035034 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2lsb7"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.035071 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-srhjb"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.035253 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036439 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8js4"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036541 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036598 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036630 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnfd\" (UniqueName: \"kubernetes.io/projected/0131c573-bf76-49f4-9581-dd39ef60b27f-kube-api-access-pnnfd\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036648 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036695 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c7096e1-8ca1-483d-8e12-1cc79d28182a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036716 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036735 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036755 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036789 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036810 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttb4\" (UniqueName: \"kubernetes.io/projected/116ae5bc-cf7e-45ad-9800-501bcfc04ff7-kube-api-access-fttb4\") pod \"downloads-7954f5f757-wlj8d\" (UID: \"116ae5bc-cf7e-45ad-9800-501bcfc04ff7\") " pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036828 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036846 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036882 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036905 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod 
\"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-serving-cert\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c0b903-63ed-4811-a991-9a5751a4c640-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037054 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-trusted-ca\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037127 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037150 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037195 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0131c573-bf76-49f4-9581-dd39ef60b27f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037214 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-config\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037263 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c0b903-63ed-4811-a991-9a5751a4c640-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037297 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037346 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037394 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037480 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037499 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwn6m\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-kube-api-access-jwn6m\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037525 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwlfb\" (UniqueName: \"kubernetes.io/projected/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-kube-api-access-pwlfb\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037567 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037626 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod 
\"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7096e1-8ca1-483d-8e12-1cc79d28182a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037669 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5tzz\" (UniqueName: \"kubernetes.io/projected/c8c0b903-63ed-4811-a991-9a5751a4c640-kube-api-access-k5tzz\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.038919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.038934 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.041231 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.041781 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042506 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042729 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042694 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.043385 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.044195 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-trusted-ca\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.044414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.044653 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.045133 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.045241 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.045949 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047358 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037433 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"598dc183dd2b9e8a46b146f48602e9a7534af890e299ed52ca5218c75e2d22bb"} Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048521 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048540 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048550 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048561 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048584 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048316 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047476 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7096e1-8ca1-483d-8e12-1cc79d28182a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047610 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047948 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048821 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c0b903-63ed-4811-a991-9a5751a4c640-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049196 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049457 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049732 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049765 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"route-controller-manager-6576b87f9c-j6vm5\" 
(UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049868 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-serving-cert\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049869 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c0b903-63ed-4811-a991-9a5751a4c640-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.050514 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.050997 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-config\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.051500 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z82w8"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.052273 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0131c573-bf76-49f4-9581-dd39ef60b27f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.052465 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.052765 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.053459 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.053974 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"82e541556e6ce0442b09137b0858a03054cd7e7a18942157809b43a8880c3d02"} Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.054417 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wlj8d"] Feb 17 15:56:26 
crc kubenswrapper[4808]: I0217 15:56:26.054726 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c7096e1-8ca1-483d-8e12-1cc79d28182a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.056507 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.057339 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.057489 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.057794 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.058625 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.059405 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6cdd5cb18e1bdebffd9820b4e73b86bc68c6546abca2d803fe6bf1f7fb6af638"} Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.059618 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.061283 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.061537 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.062610 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.064183 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.065066 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.066125 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.067632 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x2jlg"] Feb 17 
15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.068698 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bqslk"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.068828 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.069891 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.072076 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.072106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.074371 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2jlg"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.077429 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.079470 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxj7b"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.080530 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z4qfh"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.080791 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.080988 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.081620 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxj7b"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.082656 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4qfh"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.098354 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.117817 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.138667 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.169967 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.178510 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.197929 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.217498 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.244247 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.262374 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.278130 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.298348 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.318484 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.338035 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.359333 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.377683 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.398456 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.417511 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.437637 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.457436 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.478314 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.498866 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.519282 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.569752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfqj\" (UniqueName: \"kubernetes.io/projected/10596b8a-e57a-498e-a7e8-e017fde34d54-kube-api-access-ldfqj\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.588407 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbmd2\" (UniqueName: \"kubernetes.io/projected/656b06bf-9660-4c18-941b-5e5589f0301a-kube-api-access-vbmd2\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.599543 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.600830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/681a57d4-bd74-4910-a3f3-517b96a15123-kube-api-access-9b5xt\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.617517 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.658514 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxs6p\" (UniqueName: \"kubernetes.io/projected/d0ee93f1-93ac-4db2-b35e-5be5bded6541-kube-api-access-wxs6p\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.679078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fth\" (UniqueName: \"kubernetes.io/projected/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-kube-api-access-s8fth\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") 
" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.681398 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.695325 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcvb\" (UniqueName: \"kubernetes.io/projected/b9a99858-5ada-47b7-855c-8d3b43ab9fee-kube-api-access-7xcvb\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.698738 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.718250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.739905 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.743932 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.758919 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.780775 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.780861 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.828304 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.828552 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.831457 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.834380 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.837685 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.859521 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.879658 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.892554 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.898330 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.917682 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.938073 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.956056 4808 request.go:700] Waited for 1.004957451s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.957939 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.978498 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.990714 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-srhjb"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:26.998414 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:56:27 crc kubenswrapper[4808]: W0217 15:56:27.015313 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod656b06bf_9660_4c18_941b_5e5589f0301a.slice/crio-5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251 WatchSource:0}: Error finding container 5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251: Status 404 returned error can't find the 
container with id 5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251 Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.020000 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.024745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jp8q"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.027257 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.039428 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: W0217 15:56:27.051862 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0ee93f1_93ac_4db2_b35e_5be5bded6541.slice/crio-c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959 WatchSource:0}: Error finding container c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959: Status 404 returned error can't find the container with id c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959 Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.057139 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.085849 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.089453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" event={"ID":"656b06bf-9660-4c18-941b-5e5589f0301a","Type":"ContainerStarted","Data":"5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.092892 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9b53404cf9f369504e347bb0f59ad736ebc746180be4233f4ce52cde59acdbb6"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.093454 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.098807 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.101763 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.102093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f4548123c62df5178f29eacbe19cd33a5d6082a8ea61dd747d0fff4c6c2a9ee4"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.112042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" event={"ID":"b9a99858-5ada-47b7-855c-8d3b43ab9fee","Type":"ContainerStarted","Data":"9ec72c46f7cf7687f5d5ecfe6b876370e2c5440f0f9428a29b45160d0a3d1ed1"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.113938 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" event={"ID":"10596b8a-e57a-498e-a7e8-e017fde34d54","Type":"ContainerStarted","Data":"f7e0bc1dfc7dffda94fa4f82a03a79bbb9edf48aa7e048c81228c0ad50aed0e8"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.117058 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ba4f7b2e5f7e52f93605f2507c380d0b72e9d8edee07184f123f56d7662913f5"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.117946 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.120567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerStarted","Data":"c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.131270 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.141158 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.159543 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.178127 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.199921 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.218619 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.247241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.257313 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.278413 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.297536 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.318014 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.343199 4808 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.358132 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.377753 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.397631 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455298 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98bde021-9860-4b02-9223-512db6787eff-serving-cert\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98bde021-9860-4b02-9223-512db6787eff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455463 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455508 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2z2\" (UniqueName: \"kubernetes.io/projected/98bde021-9860-4b02-9223-512db6787eff-kube-api-access-ql2z2\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455563 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l78nd\" (UniqueName: 
\"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455612 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455653 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455783 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.456537 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:27.956507984 +0000 UTC m=+151.472867237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.460918 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.481206 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.497790 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.518937 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.539038 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.556591 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.556799 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.056732474 +0000 UTC m=+151.573091587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.556978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2z2\" (UniqueName: \"kubernetes.io/projected/98bde021-9860-4b02-9223-512db6787eff-kube-api-access-ql2z2\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-plugins-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557122 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557175 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557263 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvt4\" (UniqueName: \"kubernetes.io/projected/e20a6284-be62-4671-b75f-38b32dc20813-kube-api-access-vfvt4\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557327 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3267bf97-7e39-410a-8502-3737bfb7f963-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc 
kubenswrapper[4808]: I0217 15:56:27.557438 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557496 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-node-bootstrap-token\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557820 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbwnc\" (UniqueName: \"kubernetes.io/projected/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-kube-api-access-bbwnc\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557880 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrx9\" (UniqueName: \"kubernetes.io/projected/0b9e5453-e92d-46cd-b8fb-c989f00809ae-kube-api-access-rlrx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3267bf97-7e39-410a-8502-3737bfb7f963-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558039 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df94p\" (UniqueName: \"kubernetes.io/projected/3ba06ea2-9714-49b5-8477-8eb056bb45a4-kube-api-access-df94p\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558086 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wns2k\" (UniqueName: \"kubernetes.io/projected/b7697c8e-8996-44b9-8b66-965584ab26e2-kube-api-access-wns2k\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9v6\" (UniqueName: \"kubernetes.io/projected/71acbaae-e241-4c8e-ac2b-6dd40b15b494-kube-api-access-lx9v6\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558235 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558281 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5pw\" (UniqueName: \"kubernetes.io/projected/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-kube-api-access-hx5pw\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558365 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558409 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558453 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-csi-data-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: 
\"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-default-certificate\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558616 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558762 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9e5453-e92d-46cd-b8fb-c989f00809ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558848 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-stats-auth\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcp4\" (UniqueName: \"kubernetes.io/projected/14c6770e-9659-4e77-a7f1-f3ef06ec332d-kube-api-access-5jcp4\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558923 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559226 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559261 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559285 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-srv-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559318 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98bde021-9860-4b02-9223-512db6787eff-serving-cert\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559343 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-images\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.559372 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.059349424 +0000 UTC m=+151.575708677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b736927-813a-4b21-80d6-a0b4106e2c95-metrics-tls\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560097 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560277 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9e5453-e92d-46cd-b8fb-c989f00809ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560325 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-config\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560356 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3267bf97-7e39-410a-8502-3737bfb7f963-config\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560385 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4x8\" (UniqueName: \"kubernetes.io/projected/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-kube-api-access-ct4x8\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560465 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr4p7\" (UniqueName: \"kubernetes.io/projected/4b736927-813a-4b21-80d6-a0b4106e2c95-kube-api-access-fr4p7\") pod \"dns-operator-744455d44c-p8js4\" (UID: 
\"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-serving-cert\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560531 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98bde021-9860-4b02-9223-512db6787eff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560622 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560664 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71acbaae-e241-4c8e-ac2b-6dd40b15b494-proxy-tls\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560696 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-serving-cert\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.561265 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98bde021-9860-4b02-9223-512db6787eff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562475 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-service-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562527 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/092b0577-f19f-413d-afc5-bdc3a40f7f75-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562623 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sbh\" (UniqueName: \"kubernetes.io/projected/b26b861c-ec52-4685-846c-ea022517e9fb-kube-api-access-t4sbh\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562770 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssq98\" (UniqueName: \"kubernetes.io/projected/9bca2625-c55d-4a28-b37d-2ac43d181e26-kube-api-access-ssq98\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562823 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-config\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562988 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683fb061-dc67-431d-8a8a-d5a383794fef-config-volume\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563111 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/092b0577-f19f-413d-afc5-bdc3a40f7f75-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563242 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxgq\" (UniqueName: \"kubernetes.io/projected/26fa95d4-8240-472a-a86f-98acf35ade67-kube-api-access-mbxgq\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-etcd-client\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564007 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26b861c-ec52-4685-846c-ea022517e9fb-service-ca-bundle\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564055 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-socket-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564086 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrwgm\" (UniqueName: \"kubernetes.io/projected/69e8c398-683b-47dc-a517-633d625cbd97-kube-api-access-zrwgm\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564158 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljkk\" (UniqueName: 
\"kubernetes.io/projected/fddf9ec8-447f-487c-a863-73ec68b90737-kube-api-access-rljkk\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8cc\" (UniqueName: \"kubernetes.io/projected/728793ed-1e89-455c-8d45-92c4ab08c1f6-kube-api-access-hz8cc\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564257 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564439 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-metrics-certs\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564484 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-certs\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564703 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26fa95d4-8240-472a-a86f-98acf35ade67-proxy-tls\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564778 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-key\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566191 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566598 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566701 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6770e-9659-4e77-a7f1-f3ef06ec332d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683fb061-dc67-431d-8a8a-d5a383794fef-metrics-tls\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhkfd\" (UniqueName: \"kubernetes.io/projected/683fb061-dc67-431d-8a8a-d5a383794fef-kube-api-access-rhkfd\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566947 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8thp\" (UniqueName: \"kubernetes.io/projected/4f9ab75e-8898-4a0c-8630-c657450b648e-kube-api-access-s8thp\") pod \"migrator-59844c95c7-n5p8z\" (UID: \"4f9ab75e-8898-4a0c-8630-c657450b648e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.567045 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/728793ed-1e89-455c-8d45-92c4ab08c1f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.567132 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98bde021-9860-4b02-9223-512db6787eff-serving-cert\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 
15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.567156 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.569742 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.569871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71acbaae-e241-4c8e-ac2b-6dd40b15b494-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.569908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9c2k\" (UniqueName: \"kubernetes.io/projected/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-kube-api-access-f9c2k\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570144 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-config\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-mountpoint-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570284 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-cabundle\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570349 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bca2625-c55d-4a28-b37d-2ac43d181e26-cert\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570811 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rng2l\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-kube-api-access-rng2l\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7697c8e-8996-44b9-8b66-965584ab26e2-tmpfs\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.571364 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.571552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-registration-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.571734 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-srv-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.581445 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.583221 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.599436 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.619866 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.639365 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.659001 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.674846 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683fb061-dc67-431d-8a8a-d5a383794fef-config-volume\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675145 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.675245 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.175200049 +0000 UTC m=+151.691559152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/092b0577-f19f-413d-afc5-bdc3a40f7f75-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675469 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbxgq\" (UniqueName: \"kubernetes.io/projected/26fa95d4-8240-472a-a86f-98acf35ade67-kube-api-access-mbxgq\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675526 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-etcd-client\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675641 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26b861c-ec52-4685-846c-ea022517e9fb-service-ca-bundle\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675663 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrwgm\" (UniqueName: \"kubernetes.io/projected/69e8c398-683b-47dc-a517-633d625cbd97-kube-api-access-zrwgm\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rljkk\" (UniqueName: \"kubernetes.io/projected/fddf9ec8-447f-487c-a863-73ec68b90737-kube-api-access-rljkk\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675749 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-socket-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675766 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8cc\" (UniqueName: \"kubernetes.io/projected/728793ed-1e89-455c-8d45-92c4ab08c1f6-kube-api-access-hz8cc\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675805 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-certs\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-metrics-certs\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675898 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26fa95d4-8240-472a-a86f-98acf35ade67-proxy-tls\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-key\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: 
\"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675980 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6770e-9659-4e77-a7f1-f3ef06ec332d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683fb061-dc67-431d-8a8a-d5a383794fef-metrics-tls\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676038 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8thp\" (UniqueName: \"kubernetes.io/projected/4f9ab75e-8898-4a0c-8630-c657450b648e-kube-api-access-s8thp\") pod \"migrator-59844c95c7-n5p8z\" (UID: \"4f9ab75e-8898-4a0c-8630-c657450b648e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676057 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/728793ed-1e89-455c-8d45-92c4ab08c1f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhkfd\" (UniqueName: \"kubernetes.io/projected/683fb061-dc67-431d-8a8a-d5a383794fef-kube-api-access-rhkfd\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71acbaae-e241-4c8e-ac2b-6dd40b15b494-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676157 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9c2k\" (UniqueName: \"kubernetes.io/projected/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-kube-api-access-f9c2k\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 
crc kubenswrapper[4808]: I0217 15:56:27.676194 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-config\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676214 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-mountpoint-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676268 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bca2625-c55d-4a28-b37d-2ac43d181e26-cert\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-cabundle\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rng2l\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-kube-api-access-rng2l\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676391 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7697c8e-8996-44b9-8b66-965584ab26e2-tmpfs\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-socket-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676426 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-srv-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676611 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-registration-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676794 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-plugins-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676963 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvt4\" (UniqueName: \"kubernetes.io/projected/e20a6284-be62-4671-b75f-38b32dc20813-kube-api-access-vfvt4\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677050 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3267bf97-7e39-410a-8502-3737bfb7f963-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677111 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677164 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-node-bootstrap-token\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlrx9\" (UniqueName: \"kubernetes.io/projected/0b9e5453-e92d-46cd-b8fb-c989f00809ae-kube-api-access-rlrx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3267bf97-7e39-410a-8502-3737bfb7f963-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbwnc\" (UniqueName: \"kubernetes.io/projected/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-kube-api-access-bbwnc\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677495 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wns2k\" (UniqueName: \"kubernetes.io/projected/b7697c8e-8996-44b9-8b66-965584ab26e2-kube-api-access-wns2k\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677555 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df94p\" (UniqueName: \"kubernetes.io/projected/3ba06ea2-9714-49b5-8477-8eb056bb45a4-kube-api-access-df94p\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677649 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-registration-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677653 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9v6\" (UniqueName: \"kubernetes.io/projected/71acbaae-e241-4c8e-ac2b-6dd40b15b494-kube-api-access-lx9v6\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677789 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hx5pw\" (UniqueName: \"kubernetes.io/projected/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-kube-api-access-hx5pw\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677848 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677920 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-csi-data-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677921 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26b861c-ec52-4685-846c-ea022517e9fb-service-ca-bundle\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677945 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-plugins-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677953 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-default-certificate\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678042 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-stats-auth\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " 
pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678129 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcp4\" (UniqueName: \"kubernetes.io/projected/14c6770e-9659-4e77-a7f1-f3ef06ec332d-kube-api-access-5jcp4\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678150 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9e5453-e92d-46cd-b8fb-c989f00809ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678632 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678672 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-srv-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-images\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678742 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b736927-813a-4b21-80d6-a0b4106e2c95-metrics-tls\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9e5453-e92d-46cd-b8fb-c989f00809ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 
15:56:27.678832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-config\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678856 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678898 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4x8\" (UniqueName: \"kubernetes.io/projected/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-kube-api-access-ct4x8\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678929 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3267bf97-7e39-410a-8502-3737bfb7f963-config\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678952 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr4p7\" (UniqueName: \"kubernetes.io/projected/4b736927-813a-4b21-80d6-a0b4106e2c95-kube-api-access-fr4p7\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678996 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-serving-cert\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679022 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679077 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-serving-cert\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679097 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71acbaae-e241-4c8e-ac2b-6dd40b15b494-proxy-tls\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/092b0577-f19f-413d-afc5-bdc3a40f7f75-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679178 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679217 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-service-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4sbh\" (UniqueName: \"kubernetes.io/projected/b26b861c-ec52-4685-846c-ea022517e9fb-kube-api-access-t4sbh\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679256 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssq98\" (UniqueName: \"kubernetes.io/projected/9bca2625-c55d-4a28-b37d-2ac43d181e26-kube-api-access-ssq98\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679272 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-config\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" 
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.680459 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.680668 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71acbaae-e241-4c8e-ac2b-6dd40b15b494-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.684102 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.684561 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-config\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.685179 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-images\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.685385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-etcd-client\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.685513 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6770e-9659-4e77-a7f1-f3ef06ec332d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.686125 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3267bf97-7e39-410a-8502-3737bfb7f963-config\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.687150 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-csi-data-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.687416 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b736927-813a-4b21-80d6-a0b4106e2c95-metrics-tls\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.688412 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9e5453-e92d-46cd-b8fb-c989f00809ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.689021 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.188997822 +0000 UTC m=+151.705356915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.690047 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-mountpoint-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.691002 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-service-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.691777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692206 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b9e5453-e92d-46cd-b8fb-c989f00809ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692565 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7697c8e-8996-44b9-8b66-965584ab26e2-tmpfs\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692766 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71acbaae-e241-4c8e-ac2b-6dd40b15b494-proxy-tls\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/092b0577-f19f-413d-afc5-bdc3a40f7f75-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692880 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-serving-cert\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693258 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-serving-cert\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693261 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693818 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-config\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694011 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-key\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694212 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694640 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694270 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-cabundle\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.697170 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-srv-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.697331 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/092b0577-f19f-413d-afc5-bdc3a40f7f75-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.698608 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/728793ed-1e89-455c-8d45-92c4ab08c1f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26fa95d4-8240-472a-a86f-98acf35ade67-proxy-tls\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-default-certificate\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699618 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-metrics-certs\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699633 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-stats-auth\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-srv-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.700238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.700772 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.701450 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.702040 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3267bf97-7e39-410a-8502-3737bfb7f963-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.712454 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-config\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.720228 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.736382 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-node-bootstrap-token\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.738559 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.758664 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.770471 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-certs\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.778636 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.780927 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.781758 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.28172792 +0000 UTC m=+151.798086993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.793058 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.793229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.825302 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5tzz\" (UniqueName: \"kubernetes.io/projected/c8c0b903-63ed-4811-a991-9a5751a4c640-kube-api-access-k5tzz\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.847073 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnfd\" (UniqueName: \"kubernetes.io/projected/0131c573-bf76-49f4-9581-dd39ef60b27f-kube-api-access-pnnfd\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.866759 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.878697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.882876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.883298 4808 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.383278986 +0000 UTC m=+151.899638069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.902978 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.909515 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttb4\" (UniqueName: \"kubernetes.io/projected/116ae5bc-cf7e-45ad-9800-501bcfc04ff7-kube-api-access-fttb4\") pod \"downloads-7954f5f757-wlj8d\" (UID: \"116ae5bc-cf7e-45ad-9800-501bcfc04ff7\") " pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.911130 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.919866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwn6m\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-kube-api-access-jwn6m\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.926064 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.934125 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.942676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwlfb\" (UniqueName: \"kubernetes.io/projected/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-kube-api-access-pwlfb\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.960745 4808 request.go:700] Waited for 1.912350383s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.973935 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.985412 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.986545 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.486494868 +0000 UTC m=+152.002853971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.986606 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.995653 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.000504 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.012560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683fb061-dc67-431d-8a8a-d5a383794fef-metrics-tls\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.053838 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.053881 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.056708 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683fb061-dc67-431d-8a8a-d5a383794fef-config-volume\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.060231 4808 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.078545 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.090801 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.091279 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.591256161 +0000 UTC m=+152.107615244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.098333 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.111852 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.118208 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.125428 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.136169 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bca2625-c55d-4a28-b37d-2ac43d181e26-cert\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.137825 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.147143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerDied","Data":"c19decad51c1b69b1826c2c8e0925aa45a5bc773d28bc99648af07b790b65c35"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.151130 4808 generic.go:334] "Generic (PLEG): container finished" podID="d0ee93f1-93ac-4db2-b35e-5be5bded6541" containerID="c19decad51c1b69b1826c2c8e0925aa45a5bc773d28bc99648af07b790b65c35" exitCode=0 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.157397 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.167550 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.175711 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" event={"ID":"656b06bf-9660-4c18-941b-5e5589f0301a","Type":"ContainerStarted","Data":"b1fb9b0bb3c50dd0d5e089cc840c6da5f34844e0c492b88ce6fec93b6bb3dd8b"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.175782 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" event={"ID":"656b06bf-9660-4c18-941b-5e5589f0301a","Type":"ContainerStarted","Data":"c84eddacbd701e2f4be21f89f0238d216b00bf47018ffe21f01b7c624a5bc7c9"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.177684 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.178844 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.184704 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" event={"ID":"b9a99858-5ada-47b7-855c-8d3b43ab9fee","Type":"ContainerStarted","Data":"a8946f8ba57d15ff903547b5d3afb23f3b322a750291b72b3b9220f37b8f5053"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.184756 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" event={"ID":"b9a99858-5ada-47b7-855c-8d3b43ab9fee","Type":"ContainerStarted","Data":"cd04ae8543fbcb61e49789b7da0eacde06d915984a63355b376dce6b0abe2238"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.186772 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" event={"ID":"10596b8a-e57a-498e-a7e8-e017fde34d54","Type":"ContainerStarted","Data":"9b3cb0231c5f52b5ef2da876239e96adbe6e098823b4e3ca75f4c06c927f4847"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.191618 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.193384 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.693354663 +0000 UTC m=+152.209713736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.193888 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" event={"ID":"5b5592d9-5fbf-49ac-bab6-bf0e11f43706","Type":"ContainerStarted","Data":"5ac52f8586bdc10b8663aa8a239c5aaec2728794ed514de1896d634d8f2ce1fc"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.193941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" event={"ID":"5b5592d9-5fbf-49ac-bab6-bf0e11f43706","Type":"ContainerStarted","Data":"900c59c2d581818a176801999b6fa9e6b878076d4f9af2ecbee4785471fad41f"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.194274 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.213788 4808 generic.go:334] "Generic (PLEG): container finished" podID="681a57d4-bd74-4910-a3f3-517b96a15123" containerID="642d65938791a8bb9629f1359ff2bf1885cdcece436e6ab4ec5878dfedf1c7f7" exitCode=0 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.215300 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" event={"ID":"681a57d4-bd74-4910-a3f3-517b96a15123","Type":"ContainerDied","Data":"642d65938791a8bb9629f1359ff2bf1885cdcece436e6ab4ec5878dfedf1c7f7"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.215790 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" event={"ID":"681a57d4-bd74-4910-a3f3-517b96a15123","Type":"ContainerStarted","Data":"3036eb853088e7295948e66ca9264222c463f1d60ce8d0011f48a145e6120ab6"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.218463 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.242782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2z2\" (UniqueName: \"kubernetes.io/projected/98bde021-9860-4b02-9223-512db6787eff-kube-api-access-ql2z2\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.262291 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.298495 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.299339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.304244 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.804215861 +0000 UTC m=+152.320574934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.314999 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.328715 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbxgq\" (UniqueName: \"kubernetes.io/projected/26fa95d4-8240-472a-a86f-98acf35ade67-kube-api-access-mbxgq\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.334452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.355109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljkk\" (UniqueName: \"kubernetes.io/projected/fddf9ec8-447f-487c-a863-73ec68b90737-kube-api-access-rljkk\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.380809 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrwgm\" (UniqueName: \"kubernetes.io/projected/69e8c398-683b-47dc-a517-633d625cbd97-kube-api-access-zrwgm\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.386941 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.398930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz8cc\" (UniqueName: \"kubernetes.io/projected/728793ed-1e89-455c-8d45-92c4ab08c1f6-kube-api-access-hz8cc\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.400741 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.401254 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.901232195 +0000 UTC m=+152.417591258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.406042 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.415143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8thp\" (UniqueName: \"kubernetes.io/projected/4f9ab75e-8898-4a0c-8630-c657450b648e-kube-api-access-s8thp\") pod \"migrator-59844c95c7-n5p8z\" (UID: \"4f9ab75e-8898-4a0c-8630-c657450b648e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.441094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3267bf97-7e39-410a-8502-3737bfb7f963-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.453052 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.459769 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wlj8d"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.467988 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9c2k\" (UniqueName: \"kubernetes.io/projected/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-kube-api-access-f9c2k\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.471735 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.471791 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.478260 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.487056 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9v6\" (UniqueName: \"kubernetes.io/projected/71acbaae-e241-4c8e-ac2b-6dd40b15b494-kube-api-access-lx9v6\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.506320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.508333 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.008315491 +0000 UTC m=+152.524674564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.515093 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhkfd\" (UniqueName: \"kubernetes.io/projected/683fb061-dc67-431d-8a8a-d5a383794fef-kube-api-access-rhkfd\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.519473 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.533024 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.539899 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvt4\" (UniqueName: \"kubernetes.io/projected/e20a6284-be62-4671-b75f-38b32dc20813-kube-api-access-vfvt4\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.565301 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx5pw\" (UniqueName: \"kubernetes.io/projected/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-kube-api-access-hx5pw\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.570269 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8c0b903_63ed_4811_a991_9a5751a4c640.slice/crio-6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9 WatchSource:0}: Error finding container 6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9: Status 404 returned error can't find the container with id 6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.572073 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.581452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wns2k\" (UniqueName: \"kubernetes.io/projected/b7697c8e-8996-44b9-8b66-965584ab26e2-kube-api-access-wns2k\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.600018 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.600367 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.606518 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df94p\" (UniqueName: \"kubernetes.io/projected/3ba06ea2-9714-49b5-8477-8eb056bb45a4-kube-api-access-df94p\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.608381 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.609262 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.609643 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.109621701 +0000 UTC m=+152.625980774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.612410 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.618976 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.623146 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.657269 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4x8\" (UniqueName: \"kubernetes.io/projected/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-kube-api-access-ct4x8\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.667806 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcp4\" (UniqueName: \"kubernetes.io/projected/14c6770e-9659-4e77-a7f1-f3ef06ec332d-kube-api-access-5jcp4\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.674842 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.675199 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.677149 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4sbh\" (UniqueName: \"kubernetes.io/projected/b26b861c-ec52-4685-846c-ea022517e9fb-kube-api-access-t4sbh\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.700375 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rng2l\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-kube-api-access-rng2l\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.711010 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.714107 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mxgf8"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.715392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssq98\" (UniqueName: \"kubernetes.io/projected/9bca2625-c55d-4a28-b37d-2ac43d181e26-kube-api-access-ssq98\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.716534 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.717077 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.217059818 +0000 UTC m=+152.733418891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.719948 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.730762 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.731491 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.736671 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbwnc\" (UniqueName: \"kubernetes.io/projected/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-kube-api-access-bbwnc\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.739638 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.756888 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.757536 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.766359 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr4p7\" (UniqueName: \"kubernetes.io/projected/4b736927-813a-4b21-80d6-a0b4106e2c95-kube-api-access-fr4p7\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.766827 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.784604 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.789775 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.790084 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.805498 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode489a46b_9123_44c6_94e0_692621760dd6.slice/crio-0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2 WatchSource:0}: Error finding container 0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2: Status 404 returned error can't find the container with id 0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.807979 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.810218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlrx9\" (UniqueName: \"kubernetes.io/projected/0b9e5453-e92d-46cd-b8fb-c989f00809ae-kube-api-access-rlrx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.827712 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.828121 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.32810067 +0000 UTC m=+152.844459743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.829827 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25b3b271_e6e0_49c4_8fa2_17d8f8f2d5fa.slice/crio-ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784 WatchSource:0}: Error finding container ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784: Status 404 returned error can't find the container with id ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.830363 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.836331 4808 csr.go:261] certificate signing request csr-s7dzb is approved, waiting to be issued Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.837195 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.850050 4808 csr.go:257] certificate signing request csr-s7dzb is issued Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.851284 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.855872 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7649915_6408_4c30_8faa_0fb3ea55007a.slice/crio-82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7 WatchSource:0}: Error finding container 82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7: Status 404 returned error can't find the container with id 82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.862360 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.865559 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.871189 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z82w8"] Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.877804 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33978535_84b2_4def_af5a_d2819171e202.slice/crio-844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41 WatchSource:0}: Error finding container 844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41: Status 404 returned error can't find the container with id 844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.879890 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.930093 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.930683 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.430661245 +0000 UTC m=+152.947020318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.994191 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.020833 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.031259 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.031492 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.531454401 +0000 UTC m=+153.047813474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.031741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.032100 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.532085508 +0000 UTC m=+153.048444581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.135982 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.136267 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:56:29.636247825 +0000 UTC m=+153.152606898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.149312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.149800 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.649784611 +0000 UTC m=+153.166143684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.259739 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerStarted","Data":"e6ae78a7a3d903296ea675e4bc85775c5deb4343fce73afb22a46d2dd260eb2b"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.269922 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerStarted","Data":"844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.269977 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerStarted","Data":"87a30c2a90c4016dabeb2fd3e6331db8b801e3a30d3bec36b1482acb813df460"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.269994 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" event={"ID":"681a57d4-bd74-4910-a3f3-517b96a15123","Type":"ContainerStarted","Data":"321947dd480cd7b15b3faa5a3e64c3d9f25bd01d43547606487454ebdfe13c32"} Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.260382 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
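
Note that the volume is stuck in both directions: the incoming image-registry pod (UID ddc3801d-3513-460c-a719-ed9dc92697e7) cannot stage the PVC, and the departed pod UID 8f668bae-612b-4b75-9490-919e737c6a3b cannot tear the same volume down, because unmount also needs a CSI client for the driver. Both clear in one step once the driver registers, which mechanically means its registrar drops a UNIX socket into /var/lib/kubelet/plugins_registry, a directory the kubelet watches. A sketch of that discovery shape (illustrative, not the kubelet's implementation; assumes the third-party github.com/fsnotify/fsnotify package):

    package main

    import (
        "fmt"
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // The kubelet watches this directory; a driver's registrar drops a
        // UNIX socket here, the kubelet calls GetInfo over it, and the
        // driver name then appears in the registered list that the
        // surrounding errors say is missing it.
        if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&fsnotify.Create != 0 {
                fmt.Println("new plugin socket:", ev.Name)
            }
        }
    }
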
No retries permitted until 2026-02-17 15:56:29.760365342 +0000 UTC m=+153.276724415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.260317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.270454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.271191 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.771175594 +0000 UTC m=+153.287534667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.283090 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" event={"ID":"c8c0b903-63ed-4811-a991-9a5751a4c640","Type":"ContainerStarted","Data":"0efbd4b20b52726670445669c69fa3d84a33cf7a9a1513f4adf2847935e90206"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.283151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" event={"ID":"c8c0b903-63ed-4811-a991-9a5751a4c640","Type":"ContainerStarted","Data":"6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.287890 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dgt46" event={"ID":"fddf9ec8-447f-487c-a863-73ec68b90737","Type":"ContainerStarted","Data":"4fc09a408ae428519ff850f04e1ece64a9e06a09d945240a4178e82219634ddd"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.291467 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 
15:56:29.295881 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" event={"ID":"728793ed-1e89-455c-8d45-92c4ab08c1f6","Type":"ContainerStarted","Data":"62720599c23d59a119c24066564cef1ed432a3f75bd093a41ebedb1728306105"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.297831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wlj8d" event={"ID":"116ae5bc-cf7e-45ad-9800-501bcfc04ff7","Type":"ContainerStarted","Data":"58c8b94806c545d56a550be6d5318f72da5d4f264e00031f9559fbabcc901c8a"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.297865 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wlj8d" event={"ID":"116ae5bc-cf7e-45ad-9800-501bcfc04ff7","Type":"ContainerStarted","Data":"e9fd786b7fdde5022035c172a3376a3a0c0e9583045af8d035ac7dc1cd54b6fb"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.299754 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.303824 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.303928 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.315583 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2lsb7"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.316369 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.327563 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" event={"ID":"9c7096e1-8ca1-483d-8e12-1cc79d28182a","Type":"ContainerStarted","Data":"4540b69253f3420e20d6978d9585183bf8fdbe0b979b02a5c2377a9b2a29ace6"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.329871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerStarted","Data":"0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.335887 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerStarted","Data":"82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.345784 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" event={"ID":"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa","Type":"ContainerStarted","Data":"ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784"} Feb 17 
15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.353535 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" event={"ID":"0131c573-bf76-49f4-9581-dd39ef60b27f","Type":"ContainerStarted","Data":"767ad1226894880b9a5000e35b613fef9ade48f52d41faa5fb859779ef7a64fc"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.353652 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" event={"ID":"0131c573-bf76-49f4-9581-dd39ef60b27f","Type":"ContainerStarted","Data":"fbddaaafdcb10be11c9a676fc963e5e0d238265a4f79731f8f5f177d19ba9003"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.364554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" event={"ID":"26fa95d4-8240-472a-a86f-98acf35ade67","Type":"ContainerStarted","Data":"30ad84d9d762a2a57f1e25cb2a8142689ce9b165ac2b500002cff9aadc52f08a"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.376275 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.377891 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.877868081 +0000 UTC m=+153.394227154 (durationBeforeRetry 500ms). 
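
The readiness-probe failure above (15:56:29.303, downloads-7954f5f757-wlj8d) is ordinary startup noise rather than a fault: the sandbox already has its IP (10.217.0.26), so the kubelet probes it, but the download-server process is not yet accepting connections; the pod reports running moments later. "Connection refused" distinguishes "nothing listening yet" from a deliberate not-ready answer such as HTTP 503. A minimal sketch of the serving side under that convention (illustrative; only the port is taken from the probe URL):

    package main

    import (
        "log"
        "net/http"
        "sync/atomic"
        "time"
    )

    func main() {
        var ready atomic.Bool
        go func() {
            time.Sleep(2 * time.Second) // stand-in for real initialization work
            ready.Store(true)
        }()
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Listening but not initialized: answer 503, a deliberate
            // "not ready" that a probe reports differently from refusal.
            if !ready.Load() {
                http.Error(w, "initializing", http.StatusServiceUnavailable)
                return
            }
            w.WriteHeader(http.StatusOK)
        })
        // Before this line is reached nothing listens on :8080, and a
        // prober gets "connection refused" exactly as in the entry above.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
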
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.445741 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.453460 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" podStartSLOduration=133.453431024 podStartE2EDuration="2m13.453431024s" podCreationTimestamp="2026-02-17 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:29.453247068 +0000 UTC m=+152.969606141" watchObservedRunningTime="2026-02-17 15:56:29.453431024 +0000 UTC m=+152.969790107" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.471106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxj7b"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.477881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.479473 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.979452628 +0000 UTC m=+153.495811701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.585428 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.586116 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.086098502 +0000 UTC m=+153.602457575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: W0217 15:56:29.636383 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb26b861c_ec52_4685_846c_ea022517e9fb.slice/crio-3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676 WatchSource:0}: Error finding container 3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676: Status 404 returned error can't find the container with id 3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676 Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.688870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.689259 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.189245402 +0000 UTC m=+153.705604475 (durationBeforeRetry 500ms). 
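
The pod_startup_latency_tracker entries (machine-approver above, more below) report the time from podCreationTimestamp to the watch-observed running time; firstStartedPulling and lastFinishedPulling pinned at 0001-01-01 00:00:00 mean no image pull was needed. For machine-approver-56656f9798-jlwrb the arithmetic is 15:56:29.453431024 minus 15:54:16, and checking it reproduces the logged figure (timestamps copied verbatim from the entry):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-02-17 15:54:16 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-02-17 15:56:29.453431024 +0000 UTC")
        // Prints 2m13.453431024s, i.e. the logged podStartSLOduration=133.453431024.
        fmt.Println(observed.Sub(created))
    }
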
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.701616 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" podStartSLOduration=131.701592696 podStartE2EDuration="2m11.701592696s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:29.700848306 +0000 UTC m=+153.217207399" watchObservedRunningTime="2026-02-17 15:56:29.701592696 +0000 UTC m=+153.217951769" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.752222 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.790853 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.791661 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.291636571 +0000 UTC m=+153.807995644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.830038 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.832713 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bqslk"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.855214 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 15:51:28 +0000 UTC, rotation deadline is 2026-11-29 09:10:09.7865851 +0000 UTC Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.855303 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6833h13m39.931284957s for next certificate rotation Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.893755 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.904132 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.404108973 +0000 UTC m=+153.920468046 (durationBeforeRetry 500ms). 
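
The certificate_manager entries above record the kubelet-serving certificate's lifecycle: expiration 2027-02-17 15:51:28, rotation deadline 2026-11-29 09:10:09, and a 6833h13m (roughly 285 day) wait until rotation. client-go picks the deadline at a jittered 70-90% of the certificate's validity; against an assumed one-year certificate issued 2026-02-17 15:51:28, the logged deadline lands about 78% in, consistent with that rule. A sketch of the computation (the issue time is an assumption; only the expiration appears in the log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline follows the shape of client-go's certificate
    // manager: rotate once a jittered 70-90% of the certificate's
    // validity has elapsed.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        validity := notAfter.Sub(notBefore)
        jitter := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(jitter * float64(validity)))
    }

    func main() {
        // notAfter is the expiration printed in the log; notBefore is an
        // assumed issue time one year earlier.
        notBefore := time.Date(2026, 2, 17, 15, 51, 28, 0, time.UTC)
        notAfter := time.Date(2027, 2, 17, 15, 51, 28, 0, time.UTC)
        // The logged deadline, 2026-11-29 09:10:09, sits ~78% into this window.
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }
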
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.942848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"] Feb 17 15:56:29 crc kubenswrapper[4808]: W0217 15:56:29.943090 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ba06ea2_9714_49b5_8477_8eb056bb45a4.slice/crio-02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42 WatchSource:0}: Error finding container 02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42: Status 404 returned error can't find the container with id 02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42 Feb 17 15:56:29 crc kubenswrapper[4808]: W0217 15:56:29.999678 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3267bf97_7e39_410a_8502_3737bfb7f963.slice/crio-535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c WatchSource:0}: Error finding container 535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c: Status 404 returned error can't find the container with id 535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.002113 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.002637 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.502606578 +0000 UTC m=+154.018965651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.013723 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.020147 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.028689 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.096929 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" podStartSLOduration=132.096903318 podStartE2EDuration="2m12.096903318s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.094571374 +0000 UTC m=+153.610930447" watchObservedRunningTime="2026-02-17 15:56:30.096903318 +0000 UTC m=+153.613262391" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.105901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.106304 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.606288402 +0000 UTC m=+154.122647475 (durationBeforeRetry 500ms). 
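
Beyond the CSI errors, most of this window is restart churn: the monotonic offsets (m=+152s against wall-clock 15:56:28) place the kubelet's start near 15:53:56, so every pod needs a fresh sandbox ("No sandbox for pod can be found. Need to start a new one") and cAdvisor briefly races CRI-O on newly created cgroups ("Failed to process watch event ... can't find the container with id ..."), which clears on the next housekeeping pass. When triaging a dump like this, tallying the recurring messages separates that churn from the one persistent fault; a small stand-alone sketch (substrings copied from the entries; the filename in the comment is illustrative):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Tallies the recurring messages in a kubelet journal dump read from
    // stdin. Run as: go run triage.go < kubelet.log
    func main() {
        patterns := map[string]string{
            "sandbox restart churn": "No sandbox for pod can be found",
            "cadvisor/CRI-O race":   "can't find the container with id",
            "CSI driver missing":    "not found in the list of registered CSI drivers",
        }
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            for name, sub := range patterns {
                if strings.Contains(sc.Text(), sub) {
                    counts[name]++
                }
            }
        }
        for name, n := range counts {
            fmt.Printf("%-22s %d\n", name, n)
        }
    }
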
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.108955 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2jlg"]
Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.122605 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod092b0577_f19f_413d_afc5_bdc3a40f7f75.slice/crio-faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b WatchSource:0}: Error finding container faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b: Status 404 returned error can't find the container with id faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.207167 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.207501 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.707480919 +0000 UTC m=+154.223839992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.213887 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" podStartSLOduration=133.213859121 podStartE2EDuration="2m13.213859121s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.208482976 +0000 UTC m=+153.724842049" watchObservedRunningTime="2026-02-17 15:56:30.213859121 +0000 UTC m=+153.730218204"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.218145 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.264228 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" podStartSLOduration=133.264206633 podStartE2EDuration="2m13.264206633s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.251446697 +0000 UTC m=+153.767805780" watchObservedRunningTime="2026-02-17 15:56:30.264206633 +0000 UTC m=+153.780565696"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.276913 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.287515 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.300093 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" podStartSLOduration=133.300066553 podStartE2EDuration="2m13.300066553s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.299083816 +0000 UTC m=+153.815442909" watchObservedRunningTime="2026-02-17 15:56:30.300066553 +0000 UTC m=+153.816425626"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.313927 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.314351 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.814337209 +0000 UTC m=+154.330696282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.332139 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f0bc0d_40c0_45b7_b6c4_7b285ba26c52.slice/crio-bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def WatchSource:0}: Error finding container bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def: Status 404 returned error can't find the container with id bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def
Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.346715 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7697c8e_8996_44b9_8b66_965584ab26e2.slice/crio-3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21 WatchSource:0}: Error finding container 3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21: Status 404 returned error can't find the container with id 3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21
Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.358454 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c6770e_9659_4e77_a7f1_f3ef06ec332d.slice/crio-fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37 WatchSource:0}: Error finding container fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37: Status 404 returned error can't find the container with id fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.358516 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8js4"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.359604 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.380490 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.403235 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4qfh"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.404340 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wlj8d" podStartSLOduration=133.404319163 podStartE2EDuration="2m13.404319163s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.390815737 +0000 UTC m=+153.907174810" watchObservedRunningTime="2026-02-17 15:56:30.404319163 +0000 UTC m=+153.920678236"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.407041 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"]
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.415437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.416140 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.916120311 +0000 UTC m=+154.432479384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.463118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" event={"ID":"e20a6284-be62-4671-b75f-38b32dc20813","Type":"ContainerStarted","Data":"46488ee8d17bd26171359dd8a8e243ec82f66e1f7ec6373f1973739186bb8608"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.514644 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" event={"ID":"728793ed-1e89-455c-8d45-92c4ab08c1f6","Type":"ContainerStarted","Data":"0e2295ac419b2dc097f140848b76ed1756cacf4b44747f5a97fc1cfe0a8b9711"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.518216 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.572261 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.072211754 +0000 UTC m=+154.588570827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.586321 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerStarted","Data":"f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.586991 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.599661 4808 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j6vm5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.599735 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.614915 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"7fcc3e4b3e72a540ddfc1939e87ac4ce7d3bb78661a8bb6f21a95f2e2afecfda"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.625468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.626095 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.12605284 +0000 UTC m=+154.642411913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.679422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" event={"ID":"71acbaae-e241-4c8e-ac2b-6dd40b15b494","Type":"ContainerStarted","Data":"3e89b193f707b0cb6f40ddbd3be40b4434d71dfa91333f6a8492228f51982188"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.680027 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" event={"ID":"71acbaae-e241-4c8e-ac2b-6dd40b15b494","Type":"ContainerStarted","Data":"fd73c63544ba33b7f4743f37f0b3438c023b57fcaebfe84fe6a81d3d921660d5"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.727377 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.728682 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.228663545 +0000 UTC m=+154.745022618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.747340 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" event={"ID":"445cb05c-ac1a-44a2-864f-a87e0e7b29a5","Type":"ContainerStarted","Data":"042396a13a5329504a1fae70fc09bdfe2ab24d3cc60fa07dfc947083a18771e6"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.749206 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" event={"ID":"4f9ab75e-8898-4a0c-8630-c657450b648e","Type":"ContainerStarted","Data":"65cd2ca01645fae2a06426f9da167fcadb7900d0665e3ff976914945a22ae214"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.758170 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2jlg" event={"ID":"683fb061-dc67-431d-8a8a-d5a383794fef","Type":"ContainerStarted","Data":"9f060864e83d276fe705e23e0395af9e9048caed59a1822022d020e0a81836fa"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.760993 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" event={"ID":"14c6770e-9659-4e77-a7f1-f3ef06ec332d","Type":"ContainerStarted","Data":"fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.766566 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" event={"ID":"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa","Type":"ContainerStarted","Data":"507800b9841cc80b1865f606d7f977e50047f1cac5275561e18d7592e1f64531"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.766670 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mxgf8"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.769654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" event={"ID":"9c7096e1-8ca1-483d-8e12-1cc79d28182a","Type":"ContainerStarted","Data":"20bff2b811aa836fd61417fa647f37f9de8e986a28076ef932a459fc43055c3e"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.770661 4808 patch_prober.go:28] interesting pod/console-operator-58897d9998-mxgf8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.770703 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" podUID="25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.776482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerStarted","Data":"1e875ee300c0488d8291c56021229aac4c3401a41ad1f2d3dc23a2913df4c895"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.776525 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerStarted","Data":"c09ec5e2ee88b663934e8350a60d6fbc3a441771d379f75fb2671fa0bb4feda0"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.788352 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" event={"ID":"3267bf97-7e39-410a-8502-3737bfb7f963","Type":"ContainerStarted","Data":"535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.792320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerStarted","Data":"fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.792689 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.795486 4808 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cvqck container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.795560 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.800070 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dgt46" event={"ID":"fddf9ec8-447f-487c-a863-73ec68b90737","Type":"ContainerStarted","Data":"e85c9b5aaeb7b5b5a0c652c7848594f38267be8786ae7c4e2293038778dbf6fb"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.806873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" event={"ID":"3ba06ea2-9714-49b5-8477-8eb056bb45a4","Type":"ContainerStarted","Data":"02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.818384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerStarted","Data":"026165e1bd109fad794dffddae09d3e255a5318f60f94f71f305c72e7d4ac00e"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.829380 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.829541 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.329507633 +0000 UTC m=+154.845866706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.829971 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.830620 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.330600782 +0000 UTC m=+154.846959845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.833187 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" event={"ID":"0131c573-bf76-49f4-9581-dd39ef60b27f","Type":"ContainerStarted","Data":"71d3523977c68d7be7a0fd789fd9343dd3bcfe2e002a98f8e88fb2e3a9cfcd13"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.842229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" event={"ID":"26fa95d4-8240-472a-a86f-98acf35ade67","Type":"ContainerStarted","Data":"4d0b6ff7e08b05b7d2862bcc5291ffbb8e1e202799902c4edd8fb74af81ab746"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.849507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" event={"ID":"b8b124f4-97ab-4512-a1a2-b93bc4e724e8","Type":"ContainerStarted","Data":"551a33e50c7398d763eee1244f86da9b8f2ba2e4db083390f8a3e5f9c52519f2"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.867274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerStarted","Data":"306d019fd0a960ebe596dd62bde91fac66d83ac96ee596dcc0dcc7215c74b83c"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.882315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" event={"ID":"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52","Type":"ContainerStarted","Data":"bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.903115 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podStartSLOduration=132.903088333 podStartE2EDuration="2m12.903088333s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.901370496 +0000 UTC m=+154.417729569" watchObservedRunningTime="2026-02-17 15:56:30.903088333 +0000 UTC m=+154.419447406"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.931143 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.932507 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.432474448 +0000 UTC m=+154.948833511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.941460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwcd2" event={"ID":"b26b861c-ec52-4685-846c-ea022517e9fb","Type":"ContainerStarted","Data":"3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.944524 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" event={"ID":"092b0577-f19f-413d-afc5-bdc3a40f7f75","Type":"ContainerStarted","Data":"faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.963786 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerStarted","Data":"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a"}
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.983552 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dgt46" podStartSLOduration=5.983530569 podStartE2EDuration="5.983530569s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.931797379 +0000 UTC m=+154.448156452" watchObservedRunningTime="2026-02-17 15:56:30.983530569 +0000 UTC m=+154.499889642"
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.999393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" event={"ID":"8ce31dac-90ec-4aa8-b765-1ee1add26c2d","Type":"ContainerStarted","Data":"0fc5e8095e93cd2824fbf14d2c5476e057998ed4379d9831be2286540517c16b"}
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.004919 4808 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pd6wv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.004962 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" podUID="8ce31dac-90ec-4aa8-b765-1ee1add26c2d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.005163 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.018627 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" podStartSLOduration=134.018597307 podStartE2EDuration="2m14.018597307s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.986130089 +0000 UTC m=+154.502489162" watchObservedRunningTime="2026-02-17 15:56:31.018597307 +0000 UTC m=+154.534956380"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.035160 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.037448 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.537416976 +0000 UTC m=+155.053776049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.038855 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" event={"ID":"b7697c8e-8996-44b9-8b66-965584ab26e2","Type":"ContainerStarted","Data":"3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21"}
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.044152 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" event={"ID":"2d6f6cc0-7fc0-411c-800f-f98dc61b5035","Type":"ContainerStarted","Data":"5fe85b50798642cca4b4739ce6cf54363e8a0a7f3426dba4efea7f36d163df35"}
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.045908 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.045983 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.055264 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" podStartSLOduration=134.055125255 podStartE2EDuration="2m14.055125255s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.050410898 +0000 UTC m=+154.566769971" watchObservedRunningTime="2026-02-17 15:56:31.055125255 +0000 UTC m=+154.571484318"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.106108 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" podStartSLOduration=133.106072722 podStartE2EDuration="2m13.106072722s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.104325846 +0000 UTC m=+154.620684919" watchObservedRunningTime="2026-02-17 15:56:31.106072722 +0000 UTC m=+154.622431795"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.138914 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.140619 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.640599447 +0000 UTC m=+155.156958520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.140652 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podStartSLOduration=134.140639488 podStartE2EDuration="2m14.140639488s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.138220202 +0000 UTC m=+154.654579275" watchObservedRunningTime="2026-02-17 15:56:31.140639488 +0000 UTC m=+154.656998561"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.179349 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" podStartSLOduration=134.179323524 podStartE2EDuration="2m14.179323524s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.1765893 +0000 UTC m=+154.692948373" watchObservedRunningTime="2026-02-17 15:56:31.179323524 +0000 UTC m=+154.695682607"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.230200 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" podStartSLOduration=134.23017813 podStartE2EDuration="2m14.23017813s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.227332373 +0000 UTC m=+154.743691446" watchObservedRunningTime="2026-02-17 15:56:31.23017813 +0000 UTC m=+154.746537193"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.240686 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.241070 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.741054204 +0000 UTC m=+155.257413277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.265304 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" podStartSLOduration=134.265280019 podStartE2EDuration="2m14.265280019s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.264287602 +0000 UTC m=+154.780646685" watchObservedRunningTime="2026-02-17 15:56:31.265280019 +0000 UTC m=+154.781639092"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.300955 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hdg74" podStartSLOduration=134.300928143 podStartE2EDuration="2m14.300928143s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.299015071 +0000 UTC m=+154.815374154" watchObservedRunningTime="2026-02-17 15:56:31.300928143 +0000 UTC m=+154.817287216"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.345623 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.345978 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.845956031 +0000 UTC m=+155.362315104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.395429 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" podStartSLOduration=133.395404719 podStartE2EDuration="2m13.395404719s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.351906572 +0000 UTC m=+154.868265645" watchObservedRunningTime="2026-02-17 15:56:31.395404719 +0000 UTC m=+154.911763792"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.395915 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jwcd2" podStartSLOduration=134.395910512 podStartE2EDuration="2m14.395910512s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.394667459 +0000 UTC m=+154.911026542" watchObservedRunningTime="2026-02-17 15:56:31.395910512 +0000 UTC m=+154.912269585"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.453542 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.454200 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.954178768 +0000 UTC m=+155.470537841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.555217 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.555694 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.055674413 +0000 UTC m=+155.572033496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.656984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.657830 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.157807305 +0000 UTC m=+155.674166368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.713071 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.725322 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.725415 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.744956 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.745359 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.762515 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.762763 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.262738814 +0000 UTC m=+155.779097907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.762951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.763373 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.26336434 +0000 UTC m=+155.779723403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.828867 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.829789 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.843527 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.864499 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.864952 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.364930487 +0000 UTC m=+155.881289570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.966936 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.967507 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.467483041 +0000 UTC m=+155.983842114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.069695 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.070085 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.570066256 +0000 UTC m=+156.086425329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.109913 4808 generic.go:334] "Generic (PLEG): container finished" podID="98bde021-9860-4b02-9223-512db6787eff" containerID="1e875ee300c0488d8291c56021229aac4c3401a41ad1f2d3dc23a2913df4c895" exitCode=0
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.110021 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerDied","Data":"1e875ee300c0488d8291c56021229aac4c3401a41ad1f2d3dc23a2913df4c895"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.158395 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" event={"ID":"4f9ab75e-8898-4a0c-8630-c657450b648e","Type":"ContainerStarted","Data":"f1af7eaa0f66662d226a2eaafb6575bc4d9168c89ee24fef058ac5d4fe51291e"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.171407 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerStarted","Data":"1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.171420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.171800 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.671782477 +0000 UTC m=+156.188141550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.172537 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84"
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.175315 4808 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sbr84 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.175368 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.204977 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" event={"ID":"b7697c8e-8996-44b9-8b66-965584ab26e2","Type":"ContainerStarted","Data":"a95102ed9187227caa549de5d8578d98e8c9e0e5d26a212f6a25f3bd1988b467"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.207237 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.208031 4808 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bmq9l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.208092 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" podUID="b7697c8e-8996-44b9-8b66-965584ab26e2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.236063 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" event={"ID":"26fa95d4-8240-472a-a86f-98acf35ade67","Type":"ContainerStarted","Data":"0826966d6d87149771be9ceb8e0a5daef9d5f2fe2ed88b1c8fb880f6e9c0614c"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.260199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" event={"ID":"e20a6284-be62-4671-b75f-38b32dc20813","Type":"ContainerStarted","Data":"3a7f1bc676889c728bffbdbaee82723db47d3e80b3bd5883c8088aa6580ee1e7"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.270718 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podStartSLOduration=134.270697082 podStartE2EDuration="2m14.270697082s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.204611225 +0000 UTC m=+155.720970318" watchObservedRunningTime="2026-02-17 15:56:32.270697082 +0000 UTC m=+155.787056155"
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.270926 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" event={"ID":"b8b124f4-97ab-4512-a1a2-b93bc4e724e8","Type":"ContainerStarted","Data":"8920c9f68a2dada17aac710b71d1b8e3fde3fcfe0616a9282fef97145c312ea8"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.274747 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.274877 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.774861025 +0000 UTC m=+156.291220118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.275671 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.775652806 +0000 UTC m=+156.292011869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.275165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.322297 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" event={"ID":"8ce31dac-90ec-4aa8-b765-1ee1add26c2d","Type":"ContainerStarted","Data":"4b2be5da98db133479da22cad2f9c7b90db7982322b06e78a4e711739d997cb8"}
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.324098 4808 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pd6wv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.324149 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" podUID="8ce31dac-90ec-4aa8-b765-1ee1add26c2d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.394287 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.395617 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.895597441 +0000 UTC m=+156.411956514 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.397943 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" event={"ID":"3ba06ea2-9714-49b5-8477-8eb056bb45a4","Type":"ContainerStarted","Data":"41ed52098133b44c5c9e31150d6c9aa64c662fbf8019ef662f732bcca8867818"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.403307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2jlg" event={"ID":"683fb061-dc67-431d-8a8a-d5a383794fef","Type":"ContainerStarted","Data":"140a91348592f9d5be82cb0c14961712188766e6c7cef5c96331471907718163"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.422154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" event={"ID":"2d6f6cc0-7fc0-411c-800f-f98dc61b5035","Type":"ContainerStarted","Data":"5aa559ed0747a6b2ab13d8fac6a52f35c01eb0325ebaa5a2cb811a356cb86be1"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.438760 4808 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7jp8q container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]log ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]etcd ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 15:56:32 crc kubenswrapper[4808]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-startinformers ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 15:56:32 crc kubenswrapper[4808]: livez check failed Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.438844 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" podUID="d0ee93f1-93ac-4db2-b35e-5be5bded6541" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.444749 4808 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" podStartSLOduration=134.444731829 podStartE2EDuration="2m14.444731829s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.280181339 +0000 UTC m=+155.796540412" watchObservedRunningTime="2026-02-17 15:56:32.444731829 +0000 UTC m=+155.961090902" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.455986 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerStarted","Data":"4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.456481 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerStarted","Data":"b07a627c0e44e85d03382e77fdbb6e3a6fef1ba1b49d24c7a30b720a10a8ce6d"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.483912 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" event={"ID":"71acbaae-e241-4c8e-ac2b-6dd40b15b494","Type":"ContainerStarted","Data":"045401e7538b14d1ef3741ef7fcf9686f582e526e1fe704e011788219910ffe7"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.498074 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.501157 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.001134955 +0000 UTC m=+156.517494018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.519809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" event={"ID":"092b0577-f19f-413d-afc5-bdc3a40f7f75","Type":"ContainerStarted","Data":"ecd09fc45743a6f9fc3cebcbe467096f9f07928922d13c4afa26394c7b053c73"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.519868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" event={"ID":"092b0577-f19f-413d-afc5-bdc3a40f7f75","Type":"ContainerStarted","Data":"22ba8a60fb5ca2d89b7a16fec0516beb65d2ea05ef0a7f8d733398a77d340355"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.523460 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" podStartSLOduration=135.523432228 podStartE2EDuration="2m15.523432228s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.445206193 +0000 UTC m=+155.961565266" watchObservedRunningTime="2026-02-17 15:56:32.523432228 +0000 UTC m=+156.039791301" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.539727 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerStarted","Data":"a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.540849 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.577425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" event={"ID":"445cb05c-ac1a-44a2-864f-a87e0e7b29a5","Type":"ContainerStarted","Data":"72bc4c8d24437e9e749d7d4bcd97db5d12fdae8924c3ed3363c14461f3b2b8dd"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.578454 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.607402 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.608171 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" podStartSLOduration=134.60815053 podStartE2EDuration="2m14.60815053s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.606761852 +0000 UTC m=+156.123120925" watchObservedRunningTime="2026-02-17 15:56:32.60815053 +0000 UTC m=+156.124509603" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.609083 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.109056984 +0000 UTC m=+156.625416057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.609301 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" podStartSLOduration=135.60929358 podStartE2EDuration="2m15.60929358s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.524005124 +0000 UTC m=+156.040364197" watchObservedRunningTime="2026-02-17 15:56:32.60929358 +0000 UTC m=+156.125652653" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.608210 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.611829 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" event={"ID":"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb","Type":"ContainerStarted","Data":"4dda1c6fa752ebf39aad20ebafc91a0bdacb7ea3eda95ca701959d2729712306"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.611874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" event={"ID":"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb","Type":"ContainerStarted","Data":"82c7b8498052c7db6301b6c7d381474378ef0fd0d5b7fab82d60f602abb43e6f"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.629071 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" event={"ID":"14c6770e-9659-4e77-a7f1-f3ef06ec332d","Type":"ContainerStarted","Data":"72c20f12164ebf86d6f323fb2ad21fd775ed7625f202920a874c45d32d619b74"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.629905 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.653497 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" event={"ID":"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52","Type":"ContainerStarted","Data":"1bbca72abc7557abc6c4328ff389a7c0fb8106ba97b69e12d3ae85589a684f81"} Feb 17 
15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.668807 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" podStartSLOduration=134.668783269 podStartE2EDuration="2m14.668783269s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.668638525 +0000 UTC m=+156.184997608" watchObservedRunningTime="2026-02-17 15:56:32.668783269 +0000 UTC m=+156.185142342" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.668989 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" event={"ID":"0b9e5453-e92d-46cd-b8fb-c989f00809ae","Type":"ContainerStarted","Data":"9f5dabab73befbc735ecb4209850931ff7234f5cccba6b61340a80ac7fbbbb27"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.669037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" event={"ID":"0b9e5453-e92d-46cd-b8fb-c989f00809ae","Type":"ContainerStarted","Data":"c1bb38e7834b1e3cca31499b884f983387b4c32fdcbfdd54789bcf688dc501ea"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.709208 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" podStartSLOduration=135.709185352 podStartE2EDuration="2m15.709185352s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.706907531 +0000 UTC m=+156.223266604" watchObservedRunningTime="2026-02-17 15:56:32.709185352 +0000 UTC m=+156.225544425" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.709288 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" event={"ID":"4b736927-813a-4b21-80d6-a0b4106e2c95","Type":"ContainerStarted","Data":"55a0f5580ac0a9a8933f18ea49236a08177ca4b4ae0093a0452031393efe2bcc"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.709339 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" event={"ID":"4b736927-813a-4b21-80d6-a0b4106e2c95","Type":"ContainerStarted","Data":"3f615bb48b49156af7952e03fd9d3dfd72050ff4da2c586b454560e08dea8345"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.710476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.712674 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.212661606 +0000 UTC m=+156.729020679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.726321 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" event={"ID":"3267bf97-7e39-410a-8502-3737bfb7f963","Type":"ContainerStarted","Data":"f9cda0bd85d70f2bb040be7aa45aad29ac3dcd5bbc8469e158ce44f2db1d2b3c"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.738347 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:32 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.738450 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.741685 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwcd2" event={"ID":"b26b861c-ec52-4685-846c-ea022517e9fb","Type":"ContainerStarted","Data":"03010ae54b2a47c5cbf745bb4ec8340b35db2e76f02b8106933962c3f82cc328"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.747696 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" podStartSLOduration=135.747679223 podStartE2EDuration="2m15.747679223s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.746186133 +0000 UTC m=+156.262545206" watchObservedRunningTime="2026-02-17 15:56:32.747679223 +0000 UTC m=+156.264038286" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.756121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4qfh" event={"ID":"9bca2625-c55d-4a28-b37d-2ac43d181e26","Type":"ContainerStarted","Data":"7e31de47cf5c126931a9310c441850afa6ddd8361e63e6ea7b4760988d17591f"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.756211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4qfh" event={"ID":"9bca2625-c55d-4a28-b37d-2ac43d181e26","Type":"ContainerStarted","Data":"9799a3d840179bd0f9bd6c405739949ce024d0e6d6998a0d416443b4c98e0d5f"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.765715 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.766281 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.779872 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.812773 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" podStartSLOduration=135.812756584 podStartE2EDuration="2m15.812756584s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.812264201 +0000 UTC m=+156.328623274" watchObservedRunningTime="2026-02-17 15:56:32.812756584 +0000 UTC m=+156.329115657" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.816845 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.817037 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.316997738 +0000 UTC m=+156.833356811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.817247 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.819402 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.319388173 +0000 UTC m=+156.835747246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.892944 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" podStartSLOduration=135.892922181 podStartE2EDuration="2m15.892922181s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.854869893 +0000 UTC m=+156.371228966" watchObservedRunningTime="2026-02-17 15:56:32.892922181 +0000 UTC m=+156.409281244" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.895470 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" podStartSLOduration=134.89546175 podStartE2EDuration="2m14.89546175s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.89250251 +0000 UTC m=+156.408861583" watchObservedRunningTime="2026-02-17 15:56:32.89546175 +0000 UTC m=+156.411820823" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.918253 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.918772 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.41872729 +0000 UTC m=+156.935086373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.919789 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.920772 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" podStartSLOduration=134.920761274 podStartE2EDuration="2m14.920761274s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.917960299 +0000 UTC m=+156.434319372" watchObservedRunningTime="2026-02-17 15:56:32.920761274 +0000 UTC m=+156.437120347" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.924137 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.424118385 +0000 UTC m=+156.940477458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.962849 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" podStartSLOduration=134.962817162 podStartE2EDuration="2m14.962817162s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.949138162 +0000 UTC m=+156.465497235" watchObservedRunningTime="2026-02-17 15:56:32.962817162 +0000 UTC m=+156.479176235" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.995466 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" podStartSLOduration=135.995435385 podStartE2EDuration="2m15.995435385s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.977024786 +0000 UTC m=+156.493383879" watchObservedRunningTime="2026-02-17 15:56:32.995435385 +0000 UTC m=+156.511794458" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.034034 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.034592 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.534553293 +0000 UTC m=+157.050912366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.084052 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" podStartSLOduration=135.084026651 podStartE2EDuration="2m15.084026651s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.032679482 +0000 UTC m=+156.549038575" watchObservedRunningTime="2026-02-17 15:56:33.084026651 +0000 UTC m=+156.600385724" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.130901 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" podStartSLOduration=136.130877737 podStartE2EDuration="2m16.130877737s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.086781525 +0000 UTC m=+156.603140598" watchObservedRunningTime="2026-02-17 15:56:33.130877737 +0000 UTC m=+156.647236810" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.136466 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.136944 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.636925741 +0000 UTC m=+157.153284814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.194370 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z4qfh" podStartSLOduration=8.194347594 podStartE2EDuration="8.194347594s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.191022535 +0000 UTC m=+156.707381608" watchObservedRunningTime="2026-02-17 15:56:33.194347594 +0000 UTC m=+156.710706687" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.242760 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.243057 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.743039601 +0000 UTC m=+157.259398674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.345545 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.345968 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.845953435 +0000 UTC m=+157.362312508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.447295 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.447486 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.94746297 +0000 UTC m=+157.463822043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.447970 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.448372 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.948359125 +0000 UTC m=+157.464718198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.541322 4808 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-j6dgq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.541871 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.550089 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.550327 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.050311052 +0000 UTC m=+157.566670125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.631708 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.652245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.652675 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.152661121 +0000 UTC m=+157.669020194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.717684 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:33 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:33 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:33 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.717748 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.754221 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.754724 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.25470135 +0000 UTC m=+157.771060423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.788600 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" event={"ID":"14c6770e-9659-4e77-a7f1-f3ef06ec332d","Type":"ContainerStarted","Data":"2470da7936a29f3f56730e7168918a901e1d6d72c1ad9da5572d1943312ac952"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.790897 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"5b040f8b829760acc053068dc69cdb50a3a6fb21d82b5d5b1a076a6fc10e2d28"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.794560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" event={"ID":"728793ed-1e89-455c-8d45-92c4ab08c1f6","Type":"ContainerStarted","Data":"1515b2c38d6c463cdf7029191fa4639f05e318748ff6cbc7fa4190670301824e"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.801041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" event={"ID":"4b736927-813a-4b21-80d6-a0b4106e2c95","Type":"ContainerStarted","Data":"cb206168ab129d006ad7d5f6d31c6572e07b746c93ed7110887c23e590e6dff2"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.807247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerStarted","Data":"62ab951b66683ebc98e2343b94934e9ee53c8fd1fe8a6fdfd37370d4c9bcaf75"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.807444 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.814038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" event={"ID":"4f9ab75e-8898-4a0c-8630-c657450b648e","Type":"ContainerStarted","Data":"f95a1b99d065c0511cc8e26a1c74ada25d15226411a1c0db49831c8c1b94a36e"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.819523 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2jlg" event={"ID":"683fb061-dc67-431d-8a8a-d5a383794fef","Type":"ContainerStarted","Data":"6d7b02d0e6d15d7663f2b440e1a47856e12a46f8aac060e4ba78b162a63bd943"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.819587 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.824607 4808 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sbr84 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 17 15:56:33 crc 
kubenswrapper[4808]: I0217 15:56:33.824676 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.839991 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"
Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.843291 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq"
Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.847779 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" podStartSLOduration=135.847756757 podStartE2EDuration="2m15.847756757s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.843029429 +0000 UTC m=+157.359388502" watchObservedRunningTime="2026-02-17 15:56:33.847756757 +0000 UTC m=+157.364115830"
Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.856270 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.863025 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.36300073 +0000 UTC m=+157.879360003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.949319 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x2jlg" podStartSLOduration=8.949299393 podStartE2EDuration="8.949299393s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.948955834 +0000 UTC m=+157.465314907" watchObservedRunningTime="2026-02-17 15:56:33.949299393 +0000 UTC m=+157.465658456"
Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.969347 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.969664 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.469626084 +0000 UTC m=+157.985985167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.970236 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.985325 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.485282557 +0000 UTC m=+158.001641630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.058189 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" podStartSLOduration=137.058166598 podStartE2EDuration="2m17.058166598s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:34.044263812 +0000 UTC m=+157.560622885" watchObservedRunningTime="2026-02-17 15:56:34.058166598 +0000 UTC m=+157.574525681"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.071833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.072186 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.572164316 +0000 UTC m=+158.088523389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.174814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.175294 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.675280636 +0000 UTC m=+158.191639709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.201195 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" podStartSLOduration=137.201176906 podStartE2EDuration="2m17.201176906s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:34.19908644 +0000 UTC m=+157.715445513" watchObservedRunningTime="2026-02-17 15:56:34.201176906 +0000 UTC m=+157.717535979"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.251019 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22x8m"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.252022 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.264400 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.276711 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.276990 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.277044 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.277080 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.277165 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.77713342 +0000 UTC m=+158.293492493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.341373 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22x8m"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381094 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381190 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381665 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.381876 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.881850053 +0000 UTC m=+158.398209126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381892 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.432891 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.434286 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.442047 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.450428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.470494 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482005 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.482165 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.982139056 +0000 UTC m=+158.498498129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482217 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.482559 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.982543226 +0000 UTC m=+158.498902299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482597 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482667 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583592 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.583743 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.083708363 +0000 UTC m=+158.600067436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583904 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583961 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.584028 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.584419 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.084402691 +0000 UTC m=+158.600761764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.584596 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.584669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.595958 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.623270 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6vvmq"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.641802 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.652300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.656431 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.678612 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686142 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686521 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686553 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686617 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.686807 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.186787071 +0000 UTC m=+158.703146144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.722755 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:56:34 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld
Feb 17 15:56:34 crc kubenswrapper[4808]: [+]process-running ok
Feb 17 15:56:34 crc kubenswrapper[4808]: healthz check failed
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.722819 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.787042 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788176 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788239 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788301 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.789256 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.789672 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.289658532 +0000 UTC m=+158.806017605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.840334 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.841396 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.854303 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.861238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.864851 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"815d41d195a9858305817e3cb2e19c39ddeead1311aafdc5105711ad98beaada"}
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.864905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"6ec7a39da2b5d4550f24f7e026c6d83f4682118c7b304a2c82fa1e54b603f474"}
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889017 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889551 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889612 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.930980 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.430943374 +0000 UTC m=+158.947302447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.007623 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.008675 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.009852 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.009947 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.010018 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.010335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.010431 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.010706 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.510692921 +0000 UTC m=+159.027051994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.067013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.111858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.112279 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.612255798 +0000 UTC m=+159.128614871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.184728 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.216671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.217110 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.717094594 +0000 UTC m=+159.233453667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.318274 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.318408 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.818381733 +0000 UTC m=+159.334740806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.318691 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.319072 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.819063791 +0000 UTC m=+159.335422864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.364618 4808 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.396310 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22x8m"]
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.420218 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.420723 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.92067709 +0000 UTC m=+159.437036163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.526398 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.527237 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.027223332 +0000 UTC m=+159.543582395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.630212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.630598 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.130559687 +0000 UTC m=+159.646918760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.724214 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:56:35 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld
Feb 17 15:56:35 crc kubenswrapper[4808]: [+]process-running ok
Feb 17 15:56:35 crc kubenswrapper[4808]: healthz check failed
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.724259 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.732813 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.733145 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.233133541 +0000 UTC m=+159.749492614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.834497 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.834661 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.334626565 +0000 UTC m=+159.850985628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.835402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.835876 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.335860469 +0000 UTC m=+159.852219542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.914711 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"]
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.929043 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"7aa9eff9e442f60586b42eaff2de3d9580aae6c64dad1bbdef28119c4acd70c1"}
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.939124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.939544 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.439524423 +0000 UTC m=+159.955883486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.977441 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerStarted","Data":"a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432"}
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.977486 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerStarted","Data":"88ab9dc080b2cadb5ff2951ac6094d56029248c1c148ac36b7e2a6167225bf7c"}
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.044999 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:36 crc kubenswrapper[4808]: E0217 15:56:36.046331 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.546316092 +0000 UTC m=+160.062675165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.058621 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" podStartSLOduration=11.058597303 podStartE2EDuration="11.058597303s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:36.023561576 +0000 UTC m=+159.539920649" watchObservedRunningTime="2026-02-17 15:56:36.058597303 +0000 UTC m=+159.574956396"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.060361 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.098951 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"]
Feb 17 15:56:36 crc kubenswrapper[4808]: W0217 15:56:36.122782 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f04008a_114c_4f19_971a_34fa574846f5.slice/crio-735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc WatchSource:0}: Error finding container 735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc: Status 404 returned error can't find the container with id 735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.146466 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:36 crc kubenswrapper[4808]: E0217 15:56:36.147331 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.647309333 +0000 UTC m=+160.163668406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.208715 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.213085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.215865 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.244364 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250693 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250797 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250892 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: E0217 15:56:36.252202 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.752188279 +0000 UTC m=+160.268547352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.321905 4808 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T15:56:35.364647235Z","Handler":null,"Name":""}
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.326869 4808 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.326924 4808 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353314 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353533 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353629 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.355812 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.355833 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.404691 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.406597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.454497 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.462946 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.463003 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.556969 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.566747 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.602449 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.603621 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.632207 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.641955 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.657870 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.657937 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.657969 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.718881 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:36 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:36 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:36 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.718974 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.757786 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.759881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.759921 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.759956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc 
kubenswrapper[4808]: I0217 15:56:36.760433 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.760667 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.777899 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.784639 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.919603 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.953337 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.004294 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.012006 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f04008a-114c-4f19-971a-34fa574846f5" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" exitCode=0 Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.012095 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.012137 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerStarted","Data":"735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.026280 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.047078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerStarted","Data":"126635f0be61976c959568021a2dceebba5ec8a4421ba4bd848eb5998d5c720b"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.069257 4808 generic.go:334] "Generic (PLEG): container finished" podID="543b2019-8399-411e-8e8b-45787b96873f" containerID="a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432" exitCode=0 Feb 17 15:56:37 crc 
kubenswrapper[4808]: I0217 15:56:37.069353 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.077302 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerID="b039d42ff08392f60bfd69fd494b2249c19f74796e443b4b4b8b827c93e49b48" exitCode=0 Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.077390 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"b039d42ff08392f60bfd69fd494b2249c19f74796e443b4b4b8b827c93e49b48"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.077414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerStarted","Data":"a45a3dcf61a1bf78b3c958287ad11993acb14303ea923a5033d56896c26a6ab3"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.101974 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"a0e2eeefc3bf87bde55affaedf8d295a474fecb9dcf906520b5bc6b26957f78c"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.104175 4808 generic.go:334] "Generic (PLEG): container finished" podID="57300b85-6c7e-49da-bb14-40055f48a85c" containerID="a0e2eeefc3bf87bde55affaedf8d295a474fecb9dcf906520b5bc6b26957f78c" exitCode=0 Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.105042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerStarted","Data":"978f619d6b3d5011491c32f00a6237544c3cbc039e50f7389d14d76374df3c9e"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.167147 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.287470 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:56:37 crc kubenswrapper[4808]: W0217 15:56:37.342898 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92dfded8_f453_4bfc_809e_e7ed7e25de27.slice/crio-f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a WatchSource:0}: Error finding container f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a: Status 404 returned error can't find the container with id f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.372806 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.373749 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.390059 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.390146 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.390844 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.462696 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.471081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.471161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.572873 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.573012 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.573029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.608614 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.611679 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.612909 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.616336 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.675146 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.675378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.675558 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.700284 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.725203 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:37 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:37 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:37 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.725286 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.776657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.776741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.776782 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod 
\"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.778251 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.778280 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.780851 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.796900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.912630 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.912696 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.912977 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.913053 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.934915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.001101 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.018673 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"] Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.019719 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.044268 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"] Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.082507 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.082565 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.082672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.138022 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerStarted","Data":"8d3e6325dff416527f0b5f7a426deb2ee9273e60e45b536362885c914658d019"} Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.151501 4808 generic.go:334] "Generic (PLEG): container finished" podID="48efd125-e3aa-444d-91a3-fa915be48b46" containerID="2d27bebccfda20ebcc5c228a8194fccc9e95ec81e20baedc530a917fdd03e867" exitCode=0 Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.151728 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"2d27bebccfda20ebcc5c228a8194fccc9e95ec81e20baedc530a917fdd03e867"} Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.161848 4808 generic.go:334] "Generic (PLEG): container finished" podID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2" exitCode=0 Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.161932 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2"} Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.161991 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerStarted","Data":"f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a"} Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.185557 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod 
\"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.185661 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.185761 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.188507 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.188610 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.193538 4808 generic.go:334] "Generic (PLEG): container finished" podID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerID="4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44" exitCode=0 Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.193966 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerDied","Data":"4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44"} Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.209877 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerStarted","Data":"2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da"} Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.209930 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerStarted","Data":"6e3f1081b00b18d9f343d94a49f4eb8fd3475f6dc82e8e6676483c99ff105dda"} Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.210606 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.218387 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc 
kubenswrapper[4808]: I0217 15:56:38.219499 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.219536 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.245067 4808 patch_prober.go:28] interesting pod/console-f9d7485db-hdg74 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.245131 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hdg74" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.262739 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" podStartSLOduration=141.262714588 podStartE2EDuration="2m21.262714588s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:38.255064441 +0000 UTC m=+161.771423514" watchObservedRunningTime="2026-02-17 15:56:38.262714588 +0000 UTC m=+161.779073661" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.272310 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 15:56:38 crc kubenswrapper[4808]: W0217 15:56:38.332961 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode22d34a8_92f6_4a2a_a0f5_e063c25afac1.slice/crio-74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2 WatchSource:0}: Error finding container 74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2: Status 404 returned error can't find the container with id 74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2 Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.396269 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.712421 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.717122 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:38 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:38 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:38 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.717179 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.725009 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.927704 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"] Feb 17 15:56:38 crc kubenswrapper[4808]: W0217 15:56:38.945451 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf27437e_6547_4705_bbe7_08a726639dbe.slice/crio-1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480 WatchSource:0}: Error finding container 1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480: Status 404 returned error can't find the container with id 1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480 Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.225214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerStarted","Data":"870afdbdf8bbaf38a8a882e84c4b0e9c69042050dd1e130951409c7fee498caf"} Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.228379 4808 generic.go:334] "Generic (PLEG): container finished" podID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerID="3c46a03c8aecba377b0d1ea2fda18a067c3dd9d9e53d4229b5338fca0d7a98e0" exitCode=0 Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.228477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"3c46a03c8aecba377b0d1ea2fda18a067c3dd9d9e53d4229b5338fca0d7a98e0"} Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.228504 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerStarted","Data":"74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2"} Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.243496 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.243473535 podStartE2EDuration="2.243473535s" podCreationTimestamp="2026-02-17 15:56:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:39.237621367 +0000 UTC m=+162.753980440" watchObservedRunningTime="2026-02-17 15:56:39.243473535 +0000 UTC m=+162.759832608" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.281037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerStarted","Data":"1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480"} Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.642747 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.722497 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.728900 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.816562 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"7baa3ebb-6bb0-4744-b096-971958bcd263\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.816640 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"7baa3ebb-6bb0-4744-b096-971958bcd263\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.817513 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"7baa3ebb-6bb0-4744-b096-971958bcd263\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.817820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.818802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume" (OuterVolumeSpecName: "config-volume") pod "7baa3ebb-6bb0-4744-b096-971958bcd263" (UID: "7baa3ebb-6bb0-4744-b096-971958bcd263"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.823486 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c" (OuterVolumeSpecName: "kube-api-access-gmv2c") pod "7baa3ebb-6bb0-4744-b096-971958bcd263" (UID: "7baa3ebb-6bb0-4744-b096-971958bcd263"). InnerVolumeSpecName "kube-api-access-gmv2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.827227 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.842582 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7baa3ebb-6bb0-4744-b096-971958bcd263" (UID: "7baa3ebb-6bb0-4744-b096-971958bcd263"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.919079 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") on node \"crc\" DevicePath \"\"" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.919123 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.919134 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.072498 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.228524 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:56:40 crc kubenswrapper[4808]: E0217 15:56:40.228944 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerName="collect-profiles" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.228980 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerName="collect-profiles" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.229154 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerName="collect-profiles" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.229939 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.232516 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.239155 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.275008 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.327685 4808 generic.go:334] "Generic (PLEG): container finished" podID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerID="870afdbdf8bbaf38a8a882e84c4b0e9c69042050dd1e130951409c7fee498caf" exitCode=0 Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.328518 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerDied","Data":"870afdbdf8bbaf38a8a882e84c4b0e9c69042050dd1e130951409c7fee498caf"} Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.338081 4808 generic.go:334] "Generic (PLEG): container finished" podID="df27437e-6547-4705-bbe7-08a726639dbe" containerID="7be6898f1f88ea761e64c2d8022df14c7db8627e97d2f080f379df7514b92a85" exitCode=0 Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.338204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"7be6898f1f88ea761e64c2d8022df14c7db8627e97d2f080f379df7514b92a85"} Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.348488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerDied","Data":"b07a627c0e44e85d03382e77fdbb6e3a6fef1ba1b49d24c7a30b720a10a8ce6d"} Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.348562 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07a627c0e44e85d03382e77fdbb6e3a6fef1ba1b49d24c7a30b720a10a8ce6d" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.348515 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.438121 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.438178 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.520848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z8tn8"] Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.539854 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.540028 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.539903 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:40 crc kubenswrapper[4808]: W0217 15:56:40.545162 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88c3e5f_7390_477c_ae74_aced26a8ddf9.slice/crio-1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51 WatchSource:0}: Error finding container 1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51: Status 404 returned error can't find the container with id 1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51 Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.560461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.852143 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.297462 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:56:41 crc kubenswrapper[4808]: W0217 15:56:41.307348 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7c3eff00_0ae7_4c6a_ad5f_931c2cf09940.slice/crio-641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685 WatchSource:0}: Error finding container 641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685: Status 404 returned error can't find the container with id 641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685 Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.369247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940","Type":"ContainerStarted","Data":"641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685"} Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.373421 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" event={"ID":"b88c3e5f-7390-477c-ae74-aced26a8ddf9","Type":"ContainerStarted","Data":"a8179ccd7a37be51ec49686db81a755d6740e78a2ba8586d22c71af160ecf913"} Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.373453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" event={"ID":"b88c3e5f-7390-477c-ae74-aced26a8ddf9","Type":"ContainerStarted","Data":"1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51"} Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.811107 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.967657 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"92637ea3-788c-438d-a664-c2b8d640f2d1\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.967773 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"92637ea3-788c-438d-a664-c2b8d640f2d1\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.967870 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "92637ea3-788c-438d-a664-c2b8d640f2d1" (UID: "92637ea3-788c-438d-a664-c2b8d640f2d1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.969513 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.976094 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "92637ea3-788c-438d-a664-c2b8d640f2d1" (UID: "92637ea3-788c-438d-a664-c2b8d640f2d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.072371 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.422804 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerDied","Data":"8d3e6325dff416527f0b5f7a426deb2ee9273e60e45b536362885c914658d019"} Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.422853 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d3e6325dff416527f0b5f7a426deb2ee9273e60e45b536362885c914658d019" Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.422854 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.465088 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" event={"ID":"b88c3e5f-7390-477c-ae74-aced26a8ddf9","Type":"ContainerStarted","Data":"06a957f888bb8269e1dbf81b7c6449a7e858c2480beeda1758a5795ebe02bd2f"} Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.468080 4808 generic.go:334] "Generic (PLEG): container finished" podID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerID="084dd9cf385adbcc2f2e5a2b91eb5e840e1a961c941e025bd32443d059e8b202" exitCode=0 Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.468138 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940","Type":"ContainerDied","Data":"084dd9cf385adbcc2f2e5a2b91eb5e840e1a961c941e025bd32443d059e8b202"} Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.485275 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-z8tn8" podStartSLOduration=146.485233801 podStartE2EDuration="2m26.485233801s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:43.483777812 +0000 UTC m=+167.000136935" watchObservedRunningTime="2026-02-17 15:56:43.485233801 +0000 UTC m=+167.001592874" Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.812401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:47 crc kubenswrapper[4808]: I0217 15:56:47.919865 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:48 crc kubenswrapper[4808]: I0217 15:56:48.224363 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:48 crc kubenswrapper[4808]: I0217 15:56:48.229382 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.837196 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.951470 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.951643 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.951778 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" (UID: "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.952254 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.966874 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" (UID: "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.054084 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.406344 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.406599 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" containerID="cri-o://fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47" gracePeriod=30 Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.424851 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.425094 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" containerID="cri-o://f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184" gracePeriod=30 Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.591913 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.591980 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.592687 4808 generic.go:334] "Generic (PLEG): container finished" podID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerID="f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184" exitCode=0 Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.592785 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerDied","Data":"f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184"} Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.598132 4808 generic.go:334] "Generic (PLEG): container finished" podID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerID="fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47" exitCode=0 Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.598216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerDied","Data":"fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47"} Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.605345 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940","Type":"ContainerDied","Data":"641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685"} Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.605392 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685" Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.605465 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:56 crc kubenswrapper[4808]: I0217 15:56:56.650841 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:58 crc kubenswrapper[4808]: I0217 15:56:58.126473 4808 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j6vm5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 17 15:56:58 crc kubenswrapper[4808]: I0217 15:56:58.127172 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 17 15:56:59 crc kubenswrapper[4808]: I0217 15:56:59.195536 4808 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cvqck container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:56:59 crc kubenswrapper[4808]: I0217 15:56:59.196192 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:57:01 crc kubenswrapper[4808]: E0217 15:57:01.715203 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 15:57:01 crc kubenswrapper[4808]: E0217 15:57:01.716294 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp46n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hn7fn_openshift-marketplace(a1db3ff7-c43f-412e-ab72-3d592b6352b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:01 crc kubenswrapper[4808]: E0217 15:57:01.717645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hn7fn" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.215094 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hn7fn" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.278821 4808 util.go:48] "No ready sandbox for pod can be found. 
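The certified-operators-hn7fn pod above moves from ErrImagePull (the pull itself failed) to ImagePullBackOff (the kubelet is now waiting before retrying). The retry delay grows exponentially per image. A small Go sketch of that backoff shape follows; the 10-second initial delay and 5-minute cap match commonly cited kubelet defaults but should be treated as assumptions here:

    // pullbackoff.go - sketch of the exponential delay behind the
    // ImagePullBackOff entries above. The 10s start and 5m cap are assumed
    // kubelet defaults, not values read from this journal.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("pull attempt %d failed; next retry in %v\n", attempt, delay)
            delay *= 2 // delay doubles after each failure...
            if delay > maxDelay {
                delay = maxDelay // ...until the cap is reached
            }
        }
    }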
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.313856 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.314230 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314248 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.314259 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314267 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.314284 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314293 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314427 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314441 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314452 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.315100 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.317227 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.360486 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.360685 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h922n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-22x8m_openshift-marketplace(543b2019-8399-411e-8e8b-45787b96873f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.361936 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-22x8m" podUID="543b2019-8399-411e-8e8b-45787b96873f" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450019 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450116 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod 
\"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450289 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450357 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450612 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.451397 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config" (OuterVolumeSpecName: "config") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.451468 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.451479 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452197 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452425 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452452 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452630 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452646 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452678 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.458772 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.461368 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf" (OuterVolumeSpecName: "kube-api-access-v8srf") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "kube-api-access-v8srf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553892 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553964 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553980 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.555744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " 
pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.557729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.558193 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.560426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.576199 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.673067 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.751285 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.753125 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerDied","Data":"82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7"} Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.753270 4808 scope.go:117] "RemoveContainer" containerID="fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.797200 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.801854 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:57:05 crc kubenswrapper[4808]: I0217 15:57:05.161237 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" path="/var/lib/kubelet/pods/a7649915-6408-4c30-8faa-0fb3ea55007a/volumes" Feb 17 15:57:05 crc kubenswrapper[4808]: I0217 15:57:05.476101 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.430306 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-22x8m" podUID="543b2019-8399-411e-8e8b-45787b96873f" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.517347 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.567881 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.572645 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfwdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8jsrz_openshift-marketplace(e22d34a8-92f6-4a2a-a0f5-e063c25afac1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.572960 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.573297 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.573312 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.573451 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.573959 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8jsrz" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" Feb 
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.574006 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"
Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.575570 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.575789 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptbxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cs597_openshift-marketplace(48efd125-e3aa-444d-91a3-fa915be48b46): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.577271 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46"
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.580303 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"]
Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.593940 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.594296 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2255r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qhtfr_openshift-marketplace(df27437e-6547-4705-bbe7-08a726639dbe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.595716 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe"
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.627153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") "
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.627717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") "
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.627755 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") "
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.627785 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") "
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.628412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca" (OuterVolumeSpecName: "client-ca") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.628987 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config" (OuterVolumeSpecName: "config") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.634897 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.639416 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t" (OuterVolumeSpecName: "kube-api-access-6nx4t") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "kube-api-access-6nx4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729362 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729648 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729796 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729848 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729864 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729874 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.781368 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerDied","Data":"87a30c2a90c4016dabeb2fd3e6331db8b801e3a30d3bec36b1482acb813df460"}
Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.781875 4808 scope.go:117] "RemoveContainer" containerID="f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.795911 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerStarted","Data":"bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93"} Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.809318 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.830240 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerStarted","Data":"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c"} Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831162 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831313 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.832988 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.833192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc 
kubenswrapper[4808]: I0217 15:57:07.838835 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.841516 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.842821 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8jsrz" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.846732 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.847704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.853586 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.856483 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.895539 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.121957 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:08 crc kubenswrapper[4808]: W0217 15:57:08.178364 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26b4e80a_42fe_4a5f_99f3_e9967587b72a.slice/crio-d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe WatchSource:0}: Error finding container d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe: Status 404 returned error can't find the container with id d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.771864 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.849565 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerStarted","Data":"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.849635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerStarted","Data":"3db618564a6ec77c73d367392142f19b47c4dabc393708105b57bd64c94ec953"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.849980 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.851700 4808 generic.go:334] "Generic (PLEG): container finished" podID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190" exitCode=0 Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.851725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.854844 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerStarted","Data":"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.854872 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerStarted","Data":"d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.855611 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.856644 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.857996 4808 generic.go:334] "Generic (PLEG): container finished" podID="57300b85-6c7e-49da-bb14-40055f48a85c" containerID="bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93" exitCode=0 Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.858054 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.863732 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.864338 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f04008a-114c-4f19-971a-34fa574846f5" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" exitCode=0 Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.864432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.897726 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" podStartSLOduration=17.897703613 podStartE2EDuration="17.897703613s" podCreationTimestamp="2026-02-17 15:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:08.894904848 +0000 UTC m=+192.411263921" watchObservedRunningTime="2026-02-17 15:57:08.897703613 +0000 UTC m=+192.414062686" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.971838 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" podStartSLOduration=17.971810197 podStartE2EDuration="17.971810197s" podCreationTimestamp="2026-02-17 15:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:08.959463414 +0000 UTC m=+192.475822487" watchObservedRunningTime="2026-02-17 15:57:08.971810197 +0000 UTC m=+192.488169270" Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.154213 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" path="/var/lib/kubelet/pods/8227d3a9-60f5-4d19-b4d1-8a0143864837/volumes" Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.885524 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerStarted","Data":"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add"} Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.888448 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" 
event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerStarted","Data":"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8"} Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.923233 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wsbjl" podStartSLOduration=3.579704015 podStartE2EDuration="35.92320537s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.026007109 +0000 UTC m=+160.542366182" lastFinishedPulling="2026-02-17 15:57:09.369508464 +0000 UTC m=+192.885867537" observedRunningTime="2026-02-17 15:57:09.917517276 +0000 UTC m=+193.433876349" watchObservedRunningTime="2026-02-17 15:57:09.92320537 +0000 UTC m=+193.439564443" Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.946239 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ts9gs" podStartSLOduration=2.7205383899999998 podStartE2EDuration="33.946216012s" podCreationTimestamp="2026-02-17 15:56:36 +0000 UTC" firstStartedPulling="2026-02-17 15:56:38.170198756 +0000 UTC m=+161.686557829" lastFinishedPulling="2026-02-17 15:57:09.395876378 +0000 UTC m=+192.912235451" observedRunningTime="2026-02-17 15:57:09.939807719 +0000 UTC m=+193.456166792" watchObservedRunningTime="2026-02-17 15:57:09.946216012 +0000 UTC m=+193.462575085" Feb 17 15:57:10 crc kubenswrapper[4808]: I0217 15:57:10.896459 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerStarted","Data":"4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4"} Feb 17 15:57:10 crc kubenswrapper[4808]: I0217 15:57:10.922881 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6vvmq" podStartSLOduration=4.26199051 podStartE2EDuration="36.922851328s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.109512128 +0000 UTC m=+160.625871201" lastFinishedPulling="2026-02-17 15:57:09.770372946 +0000 UTC m=+193.286732019" observedRunningTime="2026-02-17 15:57:10.91593599 +0000 UTC m=+194.432295063" watchObservedRunningTime="2026-02-17 15:57:10.922851328 +0000 UTC m=+194.439210401" Feb 17 15:57:11 crc kubenswrapper[4808]: I0217 15:57:11.386552 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:11 crc kubenswrapper[4808]: I0217 15:57:11.500167 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:11 crc kubenswrapper[4808]: I0217 15:57:11.901787 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager" containerID="cri-o://cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c" gracePeriod=30 Feb 17 15:57:11 crc kubenswrapper[4808]: I0217 15:57:11.903357 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager" containerID="cri-o://7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1" 
gracePeriod=30 Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.396642 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.466467 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495475 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495519 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495544 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495561 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495598 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495617 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495655 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495672 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495697 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod 
\"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497359 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497462 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config" (OuterVolumeSpecName: "config") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497480 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config" (OuterVolumeSpecName: "config") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497710 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca" (OuterVolumeSpecName: "client-ca") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.498224 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca" (OuterVolumeSpecName: "client-ca") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.503802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69" (OuterVolumeSpecName: "kube-api-access-7lc69") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "kube-api-access-7lc69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.503939 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.504134 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5" (OuterVolumeSpecName: "kube-api-access-lxsd5") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "kube-api-access-lxsd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.506659 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597099 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597147 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597162 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597175 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597187 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597200 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597213 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597226 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597242 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799126 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"] Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.799666 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799698 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager" Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.799723 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799735 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799993 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.800016 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.800793 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.807933 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"] Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.808915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.812470 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"] Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.815359 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"] Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913512 4808 generic.go:334] "Generic (PLEG): container finished" podID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1" exitCode=0 Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913675 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerDied","Data":"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"} Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913996 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerDied","Data":"3db618564a6ec77c73d367392142f19b47c4dabc393708105b57bd64c94ec953"} Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.914017 4808 scope.go:117] "RemoveContainer" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913710 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918243 4808 generic.go:334] "Generic (PLEG): container finished" podID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c" exitCode=0 Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918314 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918344 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerDied","Data":"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"} Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918586 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerDied","Data":"d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe"} Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.943926 4808 scope.go:117] "RemoveContainer" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1" Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.948729 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1\": container with ID starting with 7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1 not found: ID does not exist" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.948871 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"} err="failed to get container status \"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1\": rpc error: code = NotFound desc = could not find container \"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1\": container with ID starting with 7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1 not found: ID does not exist" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.949021 4808 scope.go:117] "RemoveContainer" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.967235 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.972881 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.983076 4808 scope.go:117] "RemoveContainer" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c" Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.985176 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c\": container with ID starting with cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c not found: ID does not exist" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.985322 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"} err="failed to get container status 
\"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c\": rpc error: code = NotFound desc = could not find container \"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c\": container with ID starting with cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c not found: ID does not exist" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.002120 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004301 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004377 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004420 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004431 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004460 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004514 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011516 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011564 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011617 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112692 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112890 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.113035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.113061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115299 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115534 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.116920 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " 
pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.120214 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.122900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.132502 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.133519 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.140392 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.153153 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.153471 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" path="/var/lib/kubelet/pods/013c1a2d-19c5-47a3-ae05-f202eac66987/volumes" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.154243 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" path="/var/lib/kubelet/pods/26b4e80a-42fe-4a5f-99f3-e9967587b72a/volumes" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.407817 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"] Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.464695 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"] Feb 17 15:57:13 crc kubenswrapper[4808]: W0217 15:57:13.486847 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9455640b_d252_4198_b7df_a410bf7df2fe.slice/crio-327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39 WatchSource:0}: Error finding container 327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39: Status 404 returned error can't find the container with id 327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39 Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.928104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerStarted","Data":"2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450"} Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.928212 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerStarted","Data":"327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39"} Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.928472 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.930905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerStarted","Data":"04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95"} Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.930961 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerStarted","Data":"5a6cae267669bf9865700e7923e707ca2f9a9c9fd07c5ade06fb9066e508ae1a"} Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.931063 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.938371 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" 
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.947920 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" podStartSLOduration=2.947901166 podStartE2EDuration="2.947901166s" podCreationTimestamp="2026-02-17 15:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:13.94546286 +0000 UTC m=+197.461821933" watchObservedRunningTime="2026-02-17 15:57:13.947901166 +0000 UTC m=+197.464260239" Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.967357 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" podStartSLOduration=2.967332302 podStartE2EDuration="2.967332302s" podCreationTimestamp="2026-02-17 15:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:13.962199503 +0000 UTC m=+197.478558566" watchObservedRunningTime="2026-02-17 15:57:13.967332302 +0000 UTC m=+197.483691375" Feb 17 15:57:14 crc kubenswrapper[4808]: I0217 15:57:14.154170 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.009086 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.010604 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.159480 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.185560 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.185651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.222939 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.820075 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.821114 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.825641 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.826091 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.829483 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.955278 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.955332 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.994644 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.005877 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.057243 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.057322 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.057930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.079814 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.204966 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.487219 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.920755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.921181 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.953751 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerStarted","Data":"8a67257d2f9fdfe95a5cbf4aabe44195eecc463b7d295e846399994ff28b484b"} Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.953813 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerStarted","Data":"56afd58e8a64a79de748ecc17d0404972690d47a6f6b7d4f90f438cdb2799a9f"} Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.965984 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.968681 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.968657348 podStartE2EDuration="1.968657348s" podCreationTimestamp="2026-02-17 15:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.968019301 +0000 UTC m=+200.484378384" watchObservedRunningTime="2026-02-17 15:57:16.968657348 +0000 UTC m=+200.485016421" Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.021789 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.961232 4808 generic.go:334] "Generic (PLEG): container finished" podID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerID="8a67257d2f9fdfe95a5cbf4aabe44195eecc463b7d295e846399994ff28b484b" exitCode=0 Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.961341 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerDied","Data":"8a67257d2f9fdfe95a5cbf4aabe44195eecc463b7d295e846399994ff28b484b"} Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.971325 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerStarted","Data":"56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.037379 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"] Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.237072 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"] Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.237762 4808 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wsbjl" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server" containerID="cri-o://0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" gracePeriod=2 Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.634239 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.801234 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"2f04008a-114c-4f19-971a-34fa574846f5\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.801318 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"2f04008a-114c-4f19-971a-34fa574846f5\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.801407 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"2f04008a-114c-4f19-971a-34fa574846f5\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.802108 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities" (OuterVolumeSpecName: "utilities") pod "2f04008a-114c-4f19-971a-34fa574846f5" (UID: "2f04008a-114c-4f19-971a-34fa574846f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.825511 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z" (OuterVolumeSpecName: "kube-api-access-z4v4z") pod "2f04008a-114c-4f19-971a-34fa574846f5" (UID: "2f04008a-114c-4f19-971a-34fa574846f5"). InnerVolumeSpecName "kube-api-access-z4v4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.903721 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.903762 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979700 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f04008a-114c-4f19-971a-34fa574846f5" containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" exitCode=0 Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979775 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979848 4808 scope.go:117] "RemoveContainer" containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979988 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.992978 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerID="56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3" exitCode=0 Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.993243 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.993892 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6vvmq" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server" containerID="cri-o://4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4" gracePeriod=2 Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.038042 4808 scope.go:117] "RemoveContainer" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.068498 4808 scope.go:117] "RemoveContainer" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.084063 4808 scope.go:117] "RemoveContainer" containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" Feb 17 15:57:19 crc kubenswrapper[4808]: E0217 15:57:19.084887 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add\": container with ID starting with 0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add not found: ID does not exist" containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.084931 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add"} err="failed to get container status \"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add\": rpc error: code = NotFound desc = could not find container \"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add\": container with ID starting with 0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add not found: ID does not exist" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.084961 4808 scope.go:117] "RemoveContainer" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" Feb 17 15:57:19 crc kubenswrapper[4808]: E0217 15:57:19.085310 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c\": container with ID starting with b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c not found: ID does not exist" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.085446 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c"} err="failed to get container status 
\"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c\": rpc error: code = NotFound desc = could not find container \"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c\": container with ID starting with b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c not found: ID does not exist" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.085468 4808 scope.go:117] "RemoveContainer" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" Feb 17 15:57:19 crc kubenswrapper[4808]: E0217 15:57:19.086041 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5\": container with ID starting with f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5 not found: ID does not exist" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.086077 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5"} err="failed to get container status \"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5\": rpc error: code = NotFound desc = could not find container \"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5\": container with ID starting with f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5 not found: ID does not exist" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.252297 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f04008a-114c-4f19-971a-34fa574846f5" (UID: "2f04008a-114c-4f19-971a-34fa574846f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.281786 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.317166 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"] Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.321194 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"] Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.341486 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.442555 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"2a35eed2-a26d-4fc0-9daa-41e30256780e\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.442907 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"2a35eed2-a26d-4fc0-9daa-41e30256780e\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.443045 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a35eed2-a26d-4fc0-9daa-41e30256780e" (UID: "2a35eed2-a26d-4fc0-9daa-41e30256780e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.443197 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.451081 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a35eed2-a26d-4fc0-9daa-41e30256780e" (UID: "2a35eed2-a26d-4fc0-9daa-41e30256780e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.544645 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.008718 4808 generic.go:334] "Generic (PLEG): container finished" podID="57300b85-6c7e-49da-bb14-40055f48a85c" containerID="4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4" exitCode=0 Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.009733 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.018413 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerStarted","Data":"616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.022756 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerDied","Data":"56afd58e8a64a79de748ecc17d0404972690d47a6f6b7d4f90f438cdb2799a9f"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.022796 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56afd58e8a64a79de748ecc17d0404972690d47a6f6b7d4f90f438cdb2799a9f" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.022850 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.029317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerStarted","Data":"ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.056122 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.095971 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hn7fn" podStartSLOduration=3.682103454 podStartE2EDuration="46.095939282s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.081235122 +0000 UTC m=+160.597594195" lastFinishedPulling="2026-02-17 15:57:19.49507094 +0000 UTC m=+203.011430023" observedRunningTime="2026-02-17 15:57:20.068384406 +0000 UTC m=+203.584743529" watchObservedRunningTime="2026-02-17 15:57:20.095939282 +0000 UTC m=+203.612298395" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.253492 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"57300b85-6c7e-49da-bb14-40055f48a85c\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.253653 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"57300b85-6c7e-49da-bb14-40055f48a85c\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.253718 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"57300b85-6c7e-49da-bb14-40055f48a85c\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.254352 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities" (OuterVolumeSpecName: "utilities") pod "57300b85-6c7e-49da-bb14-40055f48a85c" (UID: "57300b85-6c7e-49da-bb14-40055f48a85c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.263218 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx" (OuterVolumeSpecName: "kube-api-access-pzvbx") pod "57300b85-6c7e-49da-bb14-40055f48a85c" (UID: "57300b85-6c7e-49da-bb14-40055f48a85c"). InnerVolumeSpecName "kube-api-access-pzvbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.312263 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57300b85-6c7e-49da-bb14-40055f48a85c" (UID: "57300b85-6c7e-49da-bb14-40055f48a85c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.354877 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.354927 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.354940 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.436557 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.436863 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ts9gs" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server" containerID="cri-o://79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" gracePeriod=2 Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.885316 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.039846 4808 generic.go:334] "Generic (PLEG): container finished" podID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerID="616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.039864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045064 4808 generic.go:334] "Generic (PLEG): container finished" podID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045133 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045183 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045199 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045212 4808 scope.go:117] "RemoveContainer" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.050808 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"978f619d6b3d5011491c32f00a6237544c3cbc039e50f7389d14d76374df3c9e"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.050932 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.064858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"92dfded8-f453-4bfc-809e-e7ed7e25de27\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.064926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"92dfded8-f453-4bfc-809e-e7ed7e25de27\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.064958 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"92dfded8-f453-4bfc-809e-e7ed7e25de27\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.065877 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities" (OuterVolumeSpecName: "utilities") pod "92dfded8-f453-4bfc-809e-e7ed7e25de27" (UID: "92dfded8-f453-4bfc-809e-e7ed7e25de27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.072218 4808 scope.go:117] "RemoveContainer" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.086445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv" (OuterVolumeSpecName: "kube-api-access-kbjtv") pod "92dfded8-f453-4bfc-809e-e7ed7e25de27" (UID: "92dfded8-f453-4bfc-809e-e7ed7e25de27"). InnerVolumeSpecName "kube-api-access-kbjtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.099911 4808 scope.go:117] "RemoveContainer" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.110509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92dfded8-f453-4bfc-809e-e7ed7e25de27" (UID: "92dfded8-f453-4bfc-809e-e7ed7e25de27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.132496 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"] Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.135635 4808 scope.go:117] "RemoveContainer" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.135924 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"] Feb 17 15:57:21 crc kubenswrapper[4808]: E0217 15:57:21.136224 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8\": container with ID starting with 79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8 not found: ID does not exist" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.136703 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8"} err="failed to get container status \"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8\": rpc error: code = NotFound desc = could not find container \"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8\": container with ID starting with 79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8 not found: ID does not exist" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.136759 4808 scope.go:117] "RemoveContainer" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190" Feb 17 15:57:21 crc kubenswrapper[4808]: E0217 15:57:21.137384 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190\": container with ID starting with 05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190 not found: ID does not exist" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.137438 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190"} err="failed to get container status \"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190\": rpc error: code = NotFound desc = could not find container \"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190\": container with ID starting with 05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190 not found: ID does not exist" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.137476 4808 scope.go:117] "RemoveContainer" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2" Feb 17 15:57:21 crc kubenswrapper[4808]: E0217 15:57:21.137830 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2\": container with ID starting with 9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2 not found: ID does not exist" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2" Feb 17 15:57:21 crc 
kubenswrapper[4808]: I0217 15:57:21.137868 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2"} err="failed to get container status \"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2\": rpc error: code = NotFound desc = could not find container \"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2\": container with ID starting with 9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2 not found: ID does not exist" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.137893 4808 scope.go:117] "RemoveContainer" containerID="4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.150329 4808 scope.go:117] "RemoveContainer" containerID="bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.152869 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f04008a-114c-4f19-971a-34fa574846f5" path="/var/lib/kubelet/pods/2f04008a-114c-4f19-971a-34fa574846f5/volumes" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.153490 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" path="/var/lib/kubelet/pods/57300b85-6c7e-49da-bb14-40055f48a85c/volumes" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.166158 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.166188 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.166198 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.173967 4808 scope.go:117] "RemoveContainer" containerID="a0e2eeefc3bf87bde55affaedf8d295a474fecb9dcf906520b5bc6b26957f78c" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.393210 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.399935 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.592558 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.592827 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:57:21 crc kubenswrapper[4808]: 
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.592900 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.593858 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.593935 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9" gracePeriod=600
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.062598 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9" exitCode=0
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.063071 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9"}
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.063105 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"}
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.066733 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerStarted","Data":"aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852"}
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.104206 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8jsrz" podStartSLOduration=2.8835563669999997 podStartE2EDuration="45.104177748s" podCreationTimestamp="2026-02-17 15:56:37 +0000 UTC" firstStartedPulling="2026-02-17 15:56:39.23291018 +0000 UTC m=+162.749269253" lastFinishedPulling="2026-02-17 15:57:21.453531551 +0000 UTC m=+204.969890634" observedRunningTime="2026-02-17 15:57:22.097912049 +0000 UTC m=+205.614271162" watchObservedRunningTime="2026-02-17 15:57:22.104177748 +0000 UTC m=+205.620536831"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818044 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818789 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818801 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerName="pruner"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818807 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerName="pruner"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818815 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818824 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818831 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818837 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818851 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818857 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818869 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818884 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818892 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818902 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818909 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818916 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818923 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818932 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818938 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819048 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819059 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerName="pruner"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819068 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819081 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.822817 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.825682 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.831750 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.900356 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.900421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.900463 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002365 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002590 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.042109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.139971 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.152134 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" path="/var/lib/kubelet/pods/92dfded8-f453-4bfc-809e-e7ed7e25de27/volumes"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.656977 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 15:57:23 crc kubenswrapper[4808]: W0217 15:57:23.687559 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3e6a81ca_0d6e_48d2_a0a2_ada5fcb8b25e.slice/crio-c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416 WatchSource:0}: Error finding container c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416: Status 404 returned error can't find the container with id c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.080188 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerStarted","Data":"e259bf574b3e5b34a738dc5aa049367d026f2cbb8c3d1e0e5771dc0d329364c7"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.080694 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerStarted","Data":"c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.083734 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerStarted","Data":"1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.086834 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerStarted","Data":"2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.105000 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.104782529 podStartE2EDuration="2.104782529s" podCreationTimestamp="2026-02-17 15:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:24.095045046 +0000 UTC m=+207.611404139" watchObservedRunningTime="2026-02-17 15:57:24.104782529 +0000 UTC m=+207.621141602"
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.787807 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.792808 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.845358 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.095871 4808 generic.go:334] "Generic (PLEG): container finished" podID="543b2019-8399-411e-8e8b-45787b96873f" containerID="335aab9c25e746284f138cf133ee4f794236186f62c6450d29a99ecbca2622cc" exitCode=0
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.095953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"335aab9c25e746284f138cf133ee4f794236186f62c6450d29a99ecbca2622cc"}
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.099317 4808 generic.go:334] "Generic (PLEG): container finished" podID="48efd125-e3aa-444d-91a3-fa915be48b46" containerID="2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e" exitCode=0
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.099369 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e"}
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.102591 4808 generic.go:334] "Generic (PLEG): container finished" podID="df27437e-6547-4705-bbe7-08a726639dbe" containerID="1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e" exitCode=0
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.102946 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e"}
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.164015 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.136217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerStarted","Data":"1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0"}
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.138227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerStarted","Data":"ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5"}
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.141045 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerStarted","Data":"5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c"}
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.160940 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cs597" podStartSLOduration=2.777519122 podStartE2EDuration="50.160918311s" podCreationTimestamp="2026-02-17 15:56:36 +0000 UTC" firstStartedPulling="2026-02-17 15:56:38.158954072 +0000 UTC m=+161.675313145" lastFinishedPulling="2026-02-17 15:57:25.542353261 +0000 UTC m=+209.058712334" observedRunningTime="2026-02-17 15:57:26.157338654 +0000 UTC m=+209.673697727" watchObservedRunningTime="2026-02-17 15:57:26.160918311 +0000 UTC m=+209.677277384"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.178747 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22x8m" podStartSLOduration=3.574077323 podStartE2EDuration="52.178725123s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.074696876 +0000 UTC m=+160.591055949" lastFinishedPulling="2026-02-17 15:57:25.679344676 +0000 UTC m=+209.195703749" observedRunningTime="2026-02-17 15:57:26.174763385 +0000 UTC m=+209.691122458" watchObservedRunningTime="2026-02-17 15:57:26.178725123 +0000 UTC m=+209.695084196"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.216519 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhtfr" podStartSLOduration=4.024033142 podStartE2EDuration="49.216495584s" podCreationTimestamp="2026-02-17 15:56:37 +0000 UTC" firstStartedPulling="2026-02-17 15:56:40.358900003 +0000 UTC m=+163.875259076" lastFinishedPulling="2026-02-17 15:57:25.551362445 +0000 UTC m=+209.067721518" observedRunningTime="2026-02-17 15:57:26.213633667 +0000 UTC m=+209.729992760" watchObservedRunningTime="2026-02-17 15:57:26.216495584 +0000 UTC m=+209.732854657"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.567190 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.567278 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:57:27 crc kubenswrapper[4808]: I0217 15:57:27.611370 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" probeResult="failure" output=<
Feb 17 15:57:27 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 15:57:27 crc kubenswrapper[4808]: >
Feb 17 15:57:27 crc kubenswrapper[4808]: I0217 15:57:27.935222 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8jsrz"
15:57:27.935771 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:57:28 crc kubenswrapper[4808]: I0217 15:57:28.398703 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:57:28 crc kubenswrapper[4808]: I0217 15:57:28.399124 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:57:28 crc kubenswrapper[4808]: I0217 15:57:28.973943 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jsrz" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" probeResult="failure" output=< Feb 17 15:57:28 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 15:57:28 crc kubenswrapper[4808]: > Feb 17 15:57:29 crc kubenswrapper[4808]: I0217 15:57:29.466702 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server" probeResult="failure" output=< Feb 17 15:57:29 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 15:57:29 crc kubenswrapper[4808]: > Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.370388 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"] Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.370701 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager" containerID="cri-o://04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95" gracePeriod=30 Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.386757 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"] Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.388085 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager" containerID="cri-o://2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450" gracePeriod=30 Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.210806 4808 generic.go:334] "Generic (PLEG): container finished" podID="9455640b-d252-4198-b7df-a410bf7df2fe" containerID="2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450" exitCode=0 Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.210942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerDied","Data":"2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450"} Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.215306 4808 generic.go:334] "Generic (PLEG): container finished" podID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerID="04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95" exitCode=0 Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.215355 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerDied","Data":"04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95"} Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.523552 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.564445 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"] Feb 17 15:57:32 crc kubenswrapper[4808]: E0217 15:57:32.564955 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.565027 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.565196 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.565859 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.588313 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"] Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.613035 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648243 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648316 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648441 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.649948 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.650068 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config" (OuterVolumeSpecName: "config") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.656770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.657163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn" (OuterVolumeSpecName: "kube-api-access-mdvvn") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "kube-api-access-mdvvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750269 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750439 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750527 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750688 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751078 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751136 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751178 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751286 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " 
pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751397 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751424 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751444 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751468 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751784 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.752409 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.753034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config" (OuterVolumeSpecName: "config") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.753751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c" (OuterVolumeSpecName: "kube-api-access-9b42c") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "kube-api-access-9b42c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.754815 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.852773 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.853345 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.853612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.853891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854174 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854330 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854471 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854624 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854762 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.855121 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.855562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.858775 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.883087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.914858 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.224834 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerDied","Data":"5a6cae267669bf9865700e7923e707ca2f9a9c9fd07c5ade06fb9066e508ae1a"}
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.226980 4808 scope.go:117] "RemoveContainer" containerID="04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95"
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.225060 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.227968 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerDied","Data":"327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39"}
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.228094 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.265112 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"]
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.265928 4808 scope.go:117] "RemoveContainer" containerID="2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450"
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.273356 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"]
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.278273 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"]
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.282121 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"]
Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.406564 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"]
Feb 17 15:57:33 crc kubenswrapper[4808]: W0217 15:57:33.417715 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b9b1d0_5520_48a9_b0b7_2c524d8ba56d.slice/crio-70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f WatchSource:0}: Error finding container 70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f: Status 404 returned error can't find the container with id 70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.243542 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerStarted","Data":"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"}
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.244129 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerStarted","Data":"70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f"}
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.244733 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.253691 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.270703 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" podStartSLOduration=3.270675056 podStartE2EDuration="3.270675056s" podCreationTimestamp="2026-02-17 15:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:34.268508747 +0000 UTC m=+217.784867910" watchObservedRunningTime="2026-02-17 15:57:34.270675056 +0000 UTC m=+217.787034159"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.596226 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.596304 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.662434 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.815964 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"]
Feb 17 15:57:34 crc kubenswrapper[4808]: E0217 15:57:34.816457 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.816492 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.816779 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.817655 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.821265 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.821692 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.821910 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.822132 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.822503 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.822735 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.834747 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"]
Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.838069 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.006614 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.006766 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.006863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.007094 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.007264 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.116821 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117069 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117378 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.119248 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.119635 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.120297 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.128790 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.148663 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.159139 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" path="/var/lib/kubelet/pods/9455640b-d252-4198-b7df-a410bf7df2fe/volumes"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.160466 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" path="/var/lib/kubelet/pods/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb/volumes"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.169786 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.328599 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.482568 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"]
Feb 17 15:57:35 crc kubenswrapper[4808]: W0217 15:57:35.527341 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd49796a9_6bb5_4e1a_a203_95feb121a71b.slice/crio-cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b WatchSource:0}: Error finding container cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b: Status 404 returned error can't find the container with id cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b
Feb 17 15:57:36 crc kubenswrapper[4808]: I0217 15:57:36.265553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerStarted","Data":"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"}
Feb 17 15:57:36 crc kubenswrapper[4808]: I0217 15:57:36.266179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerStarted","Data":"cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b"}
Feb 17 15:57:36 crc kubenswrapper[4808]: I0217 15:57:36.613615 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:57:36 crc kubenswrapper[4808]: I0217 15:57:36.660420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:57:37 crc kubenswrapper[4808]: I0217 15:57:37.306337 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" podStartSLOduration=6.306318561 podStartE2EDuration="6.306318561s" podCreationTimestamp="2026-02-17 15:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:37.302822867 +0000 UTC m=+220.819181980" watchObservedRunningTime="2026-02-17 15:57:37.306318561 +0000 UTC m=+220.822677654"
Feb 17 15:57:37 crc kubenswrapper[4808]: I0217 15:57:37.991788 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8jsrz"
Feb 17 15:57:38 crc kubenswrapper[4808]: I0217 15:57:38.054056 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8jsrz"
Feb 17 15:57:38 crc kubenswrapper[4808]: I0217 15:57:38.457755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:57:38 crc kubenswrapper[4808]: I0217 15:57:38.516656 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qhtfr"
pods=["openshift-marketplace/redhat-operators-qhtfr"] Feb 17 15:57:40 crc kubenswrapper[4808]: I0217 15:57:40.301400 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server" containerID="cri-o://ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5" gracePeriod=2 Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.312164 4808 generic.go:334] "Generic (PLEG): container finished" podID="df27437e-6547-4705-bbe7-08a726639dbe" containerID="ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5" exitCode=0 Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.312224 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5"} Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.801276 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.953014 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"df27437e-6547-4705-bbe7-08a726639dbe\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.953102 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"df27437e-6547-4705-bbe7-08a726639dbe\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.953197 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod \"df27437e-6547-4705-bbe7-08a726639dbe\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.955518 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities" (OuterVolumeSpecName: "utilities") pod "df27437e-6547-4705-bbe7-08a726639dbe" (UID: "df27437e-6547-4705-bbe7-08a726639dbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.966841 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r" (OuterVolumeSpecName: "kube-api-access-2255r") pod "df27437e-6547-4705-bbe7-08a726639dbe" (UID: "df27437e-6547-4705-bbe7-08a726639dbe"). InnerVolumeSpecName "kube-api-access-2255r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.055282 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.055353 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.109152 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df27437e-6547-4705-bbe7-08a726639dbe" (UID: "df27437e-6547-4705-bbe7-08a726639dbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.156417 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.323860 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480"} Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.323938 4808 scope.go:117] "RemoveContainer" containerID="ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.324339 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.353888 4808 scope.go:117] "RemoveContainer" containerID="1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e" Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.384472 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"] Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.386875 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"] Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.400287 4808 scope.go:117] "RemoveContainer" containerID="7be6898f1f88ea761e64c2d8022df14c7db8627e97d2f080f379df7514b92a85" Feb 17 15:57:43 crc kubenswrapper[4808]: I0217 15:57:43.155969 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df27437e-6547-4705-bbe7-08a726639dbe" path="/var/lib/kubelet/pods/df27437e-6547-4705-bbe7-08a726639dbe/volumes" Feb 17 15:57:45 crc kubenswrapper[4808]: I0217 15:57:45.170837 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:45 crc kubenswrapper[4808]: I0217 15:57:45.180453 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:47 crc kubenswrapper[4808]: I0217 15:57:47.440103 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"] Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.395915 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"] Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.397096 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager" containerID="cri-o://a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a" gracePeriod=30 Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.487823 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"] Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.488352 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager" containerID="cri-o://29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028" gracePeriod=30 Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.037284 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045134 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045235 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045285 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045361 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.046348 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca" (OuterVolumeSpecName: "client-ca") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.046403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config" (OuterVolumeSpecName: "config") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.052717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.057834 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t" (OuterVolumeSpecName: "kube-api-access-fvm6t") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "kube-api-access-fvm6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.078237 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.146819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147225 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147337 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147433 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147874 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150273 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150372 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150456 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.149712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config" (OuterVolumeSpecName: "config") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.149735 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150385 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.153869 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94" (OuterVolumeSpecName: "kube-api-access-bwn94") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "kube-api-access-bwn94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.154005 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251500 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251546 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251556 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251567 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251596 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396687 4808 generic.go:334] "Generic (PLEG): container finished" podID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a" exitCode=0 Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396771 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerDied","Data":"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"} Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerDied","Data":"cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b"} Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396833 4808 scope.go:117] "RemoveContainer" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.397009 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400083 4808 generic.go:334] "Generic (PLEG): container finished" podID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028" exitCode=0 Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerDied","Data":"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"} Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400168 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerDied","Data":"70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f"} Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400179 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.417278 4808 scope.go:117] "RemoveContainer" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a" Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.418745 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a\": container with ID starting with a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a not found: ID does not exist" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.418803 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"} err="failed to get container status \"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a\": rpc error: code = NotFound desc = could not find container \"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a\": container with ID starting with a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a not found: ID does not exist" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.418862 4808 scope.go:117] "RemoveContainer" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.432761 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"] Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.436882 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"] Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.436949 4808 scope.go:117] "RemoveContainer" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028" Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.437884 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028\": container with ID starting with 29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028 not found: ID does not exist" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.437934 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"} err="failed to get container status \"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028\": rpc error: code = NotFound desc = could not find container \"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028\": container with ID starting with 29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028 not found: ID does not exist" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.451065 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"] Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.453290 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"] Feb 17 
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826598 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"]
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.826915 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826955 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.826974 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-utilities"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826981 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-utilities"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.826992 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-content"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826999 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-content"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.827011 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827017 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.827027 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827034 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827148 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827170 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827178 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827681 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.829639 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.830509 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.834829 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.835090 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.835101 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.835295 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.840801 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.840824 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.840879 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841767 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841787 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841770 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841918 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.842041 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.846427 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.850956 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.853728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.860565 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfjng\" (UniqueName: \"kubernetes.io/projected/8bcf84d4-b914-475a-be97-ecf8b121caf2-kube-api-access-cfjng\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.860954 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bcf84d4-b914-475a-be97-ecf8b121caf2-serving-cert\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861168 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-config\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-proxy-ca-bundles\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861498 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-client-ca\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861626 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4k92\" (UniqueName: \"kubernetes.io/projected/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-kube-api-access-v4k92\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861758 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-client-ca\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.862061 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-config\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.862171 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-serving-cert\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4k92\" (UniqueName: \"kubernetes.io/projected/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-kube-api-access-v4k92\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963749 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-client-ca\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-client-ca\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-config\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-serving-cert\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfjng\" (UniqueName: \"kubernetes.io/projected/8bcf84d4-b914-475a-be97-ecf8b121caf2-kube-api-access-cfjng\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963914 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bcf84d4-b914-475a-be97-ecf8b121caf2-serving-cert\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963953 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-config\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.964003 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-proxy-ca-bundles\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.965743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-client-ca\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.965743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-client-ca\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.965905 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-config\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.966024 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-config\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.967294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-proxy-ca-bundles\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.970309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bcf84d4-b914-475a-be97-ecf8b121caf2-serving-cert\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.970330 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-serving-cert\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.989547 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4k92\" (UniqueName: \"kubernetes.io/projected/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-kube-api-access-v4k92\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.996195 4808
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfjng\" (UniqueName: \"kubernetes.io/projected/8bcf84d4-b914-475a-be97-ecf8b121caf2-kube-api-access-cfjng\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.143130 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.149877 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.152399 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" path="/var/lib/kubelet/pods/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d/volumes" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.152983 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" path="/var/lib/kubelet/pods/d49796a9-6bb5-4e1a-a203-95feb121a71b/volumes" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.373400 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"] Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.428560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" event={"ID":"8bcf84d4-b914-475a-be97-ecf8b121caf2","Type":"ContainerStarted","Data":"b9fa5710c504d58a802b1d136f2dbe3019c05f292e7764254fa2863aa9a29b94"} Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.439067 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"] Feb 17 15:57:53 crc kubenswrapper[4808]: W0217 15:57:53.454265 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c8fa5f2_07b4_4b99_8448_3177c1e7d736.slice/crio-d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37 WatchSource:0}: Error finding container d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37: Status 404 returned error can't find the container with id d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37 Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.437521 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" event={"ID":"5c8fa5f2-07b4-4b99-8448-3177c1e7d736","Type":"ContainerStarted","Data":"2c188f25669dfc99284f724066294f22d46530bf5b5489d5f81017c230cd64a5"} Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.437964 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" event={"ID":"5c8fa5f2-07b4-4b99-8448-3177c1e7d736","Type":"ContainerStarted","Data":"d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37"} Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.437986 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.442742 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" event={"ID":"8bcf84d4-b914-475a-be97-ecf8b121caf2","Type":"ContainerStarted","Data":"40e9597401850875091ed883ff41d7cb3516ede401e5423d5484ae46fc9a9ae8"} Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.443247 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.446487 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.463862 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.493048 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" podStartSLOduration=3.4930241730000002 podStartE2EDuration="3.493024173s" podCreationTimestamp="2026-02-17 15:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:54.468548981 +0000 UTC m=+237.984908064" watchObservedRunningTime="2026-02-17 15:57:54.493024173 +0000 UTC m=+238.009383256"
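The run above is one complete pod-bringup episode for the two controller managers: the reconciler logs operationExecutor.VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp succeeded for each declared volume; PLEG reports ContainerStarted for the sandbox and the container; the readiness probe flips from "" to "ready"; and pod_startup_latency_tracker records the end-to-end figure (observedRunningTime minus podCreationTimestamp, here 3.493024173s for controller-manager-6df9b784b8-zmkjg). A minimal sketch for pulling those figures out of a dump like this one follows; it is illustrative only, not kubelet code, and the file name slo_durations.go plus the read-from-stdin invocation are assumptions.

// slo_durations.go — illustrative helper, not kubelet source: print each
// "Observed pod startup duration" entry as pod name and parsed E2E duration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches: "Observed pod startup duration" pod="ns/name" ... podStartE2EDuration="3.493024173s"
var sloRe = regexp.MustCompile(`"Observed pod startup duration" pod="([^"]+)".*?podStartE2EDuration="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // journal lines here are very long
	for sc.Scan() {
		// a single wrapped journal line can hold several entries, so take all matches
		for _, m := range sloRe.FindAllStringSubmatch(sc.Text(), -1) {
			if d, err := time.ParseDuration(m[2]); err == nil {
				fmt.Printf("%s\t%v\n", m[1], d)
			}
		}
	}
}

Fed something like journalctl -u kubelet --no-pager (an assumed invocation; any text stream containing these lines works), it would print controller-manager-6df9b784b8-zmkjg at 3.493024173s and, a few entries below, route-controller-manager-56bc8c57dd-2hsb9 at 10.854858472s.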
Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.808722 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.810592 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.854886 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" podStartSLOduration=10.854858472 podStartE2EDuration="10.854858472s" podCreationTimestamp="2026-02-17 15:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:54.520527516 +0000 UTC m=+238.036886589" watchObservedRunningTime="2026-02-17 15:58:01.854858472 +0000 UTC m=+245.371217555" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.856918 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.875674 4808 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876051 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876100 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876223 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876169 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876249 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878133 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878483 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878513 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217
15:58:01.878526 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878533 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878541 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878548 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878565 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878601 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878610 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878617 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878631 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878637 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878646 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878652 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878858 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878874 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878881 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878892 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878900 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
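This block is a static-pod replacement seen from the kubelet: the on-disk kube-apiserver-crc manifest changes, so the file source emits SyncLoop REMOVE then ADD, each of the old pod's containers is stopped with gracePeriod=15, and cpu_manager, state_mem and memory_manager drop their stale per-container accounting (RemoveStaleState, "Deleted CPUSet assignment"). A sketch for listing the kill events from such a dump, again illustrative rather than kubelet code:

// grace_kills.go — sketch: extract "Killing container with a grace period"
// entries, printing pod, container and grace period for each.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var killRe = regexp.MustCompile(`"Killing container with a grace period" pod="([^"]+)".*?containerName="([^"]+)".*?gracePeriod=(\d+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for sc.Scan() {
		for _, m := range killRe.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("pod=%s container=%s gracePeriod=%ss\n", m[1], m[2], m[3])
		}
	}
}

On the section above it would report the five kube-apiserver-crc containers (kube-apiserver, kube-apiserver-insecure-readyz, kube-apiserver-check-endpoints, kube-apiserver-cert-regeneration-controller, kube-apiserver-cert-syncer), all at 15 seconds.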
Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878907 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911012 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911198 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911330 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911462 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911658 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.913108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015224 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016091 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016120 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016146 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016170 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016208 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016330 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015982 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016394 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016458 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.145987 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: W0217 15:58:02.166275 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6 WatchSource:0}: Error finding container 9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6: Status 404 returned error can't find the container with id 9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6 Feb 17 15:58:02 crc kubenswrapper[4808]: E0217 15:58:02.171308 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513e037c419d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,LastTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.504046 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.507276 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508421 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508460 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508473 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508487 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6" exitCode=2 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508606 4808 scope.go:117] "RemoveContainer" containerID="68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df"
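Because the kill takes down the only API server, every write from the kubelet now fails with "connect: connection refused"; the "Unable to write event (may retry after sleeping)" entry above dumps the buffered Event it will retry later (the same dump reappears at 15:58:10), and PLEG then records the old containers finishing (the cert-syncer exits with code 2, the others with 0). Rebuilt with the upstream Go types, that dumped struct looks roughly like the sketch below; the field values are copied from the log, but the program itself is only an illustration, not kubelet source.

// event_shape.go — the &Event{...} dump above, reconstructed with
// k8s.io/api and k8s.io/apimachinery types; illustrative only.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ts := metav1.NewTime(time.Date(2026, 2, 17, 15, 58, 2, 169358809, time.UTC))
	ev := corev1.Event{
		ObjectMeta: metav1.ObjectMeta{
			// events are conventionally named <object>.<hex-encoded timestamp>
			Name:      "kube-apiserver-startup-monitor-crc.189513e037c419d9",
			Namespace: "openshift-kube-apiserver",
		},
		InvolvedObject: corev1.ObjectReference{
			Kind:       "Pod",
			Namespace:  "openshift-kube-apiserver",
			Name:       "kube-apiserver-startup-monitor-crc",
			UID:        "f85e55b1a89d02b0cb034b1ea31ed45a",
			APIVersion: "v1",
			FieldPath:  "spec.containers{startup-monitor}",
		},
		Reason:         "Pulled",
		Message:        `Container image "quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462" already present on machine`,
		Source:         corev1.EventSource{Component: "kubelet", Host: "crc"},
		FirstTimestamp: ts,
		LastTimestamp:  ts,
		Count:          1,
		Type:           corev1.EventTypeNormal,
	}
	fmt.Printf("%s/%s %s: %s\n", ev.Namespace, ev.InvolvedObject.Name, ev.Reason, ev.Message)
}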
Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.511260 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026"} Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.511299 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6"} Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.512724 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513040 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513276 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerID="e259bf574b3e5b34a738dc5aa049367d026f2cbb8c3d1e0e5771dc0d329364c7" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513314 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerDied","Data":"e259bf574b3e5b34a738dc5aa049367d026f2cbb8c3d1e0e5771dc0d329364c7"} Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513967 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.514500 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.515339 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:03 crc kubenswrapper[4808]: I0217 15:58:03.532435 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.037185 4808 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.038204 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.038434 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.151624 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152014 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.151911 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock" (OuterVolumeSpecName: "var-lock") pod "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" (UID: "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152108 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") "
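The installer-9-crc teardown here follows the reconciler's standard unmount flow: "operationExecutor.UnmountVolume started" for each volume, then "UnmountVolume.TearDown succeeded", then "Volume detached". Below is a sketch that cross-checks that flow in a captured dump, keyed on each volume's UniqueName; it is illustrative only, and a stuck teardown would show up as a leftover entry.

// unmount_audit.go — sketch: report volumes whose "UnmountVolume started"
// never gets a matching "Volume detached" in the captured log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// klog escapes the inner quotes, so the journal text literally contains \" around names
	startRe  = regexp.MustCompile(`UnmountVolume started for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"`)
	detachRe = regexp.MustCompile(`Volume detached for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"`)
)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for sc.Scan() {
		line := sc.Text()
		for _, m := range startRe.FindAllStringSubmatch(line, -1) {
			pending[m[1]] = true
		}
		for _, m := range detachRe.FindAllStringSubmatch(line, -1) {
			delete(pending, m[1])
		}
	}
	for uniqueName := range pending {
		fmt.Println("no 'Volume detached' seen for:", uniqueName)
	}
}

For installer-9-crc all three volumes (var-lock, kubelet-dir, kube-api-access) detach cleanly, as the entries that follow show.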
Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152225 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" (UID: "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152625 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152649 4808 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.159605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" (UID: "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.254082 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.544506 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerDied","Data":"c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416"} Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.544603 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.544617 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.550307 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.552987 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b" exitCode=0 Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.570347 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.571299 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.791912 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17
15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.793878 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.794983 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.795961 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.796617 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864116 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864265 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864282 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864430 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864446 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864533 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864994 4808 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.865073 4808 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.865099 4808 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.154290 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.578168 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.580815 4808 scope.go:117] "RemoveContainer" containerID="77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.581193 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.582072 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.582504 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.583182 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.587259 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.587877 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.589749 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.601851 4808 scope.go:117] "RemoveContainer" containerID="715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.622159 4808 scope.go:117] "RemoveContainer" containerID="695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.647697 4808 scope.go:117] "RemoveContainer" containerID="e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.671211 4808 scope.go:117] "RemoveContainer" containerID="5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.688748 4808 scope.go:117] "RemoveContainer" containerID="d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.153198 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.153998 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.154551 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.615982 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.617462 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.618231 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 
15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.618670 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.619268 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.619500 4808 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.620755 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.822511 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Feb 17 15:58:08 crc kubenswrapper[4808]: E0217 15:58:08.207118 4808 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" volumeName="registry-storage" Feb 17 15:58:08 crc kubenswrapper[4808]: E0217 15:58:08.223466 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms" Feb 17 15:58:09 crc kubenswrapper[4808]: E0217 15:58:09.025224 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s"
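The node-lease episode above shows the kubelet's retry discipline while the API server is down: five rapid in-place lease updates fail ("failed 5 attempts to update lease"), it falls back to ensuring the lease exists, and each failed attempt doubles the retry interval — 200ms, 400ms, 800ms, 1.6s, and 3.2s just below. A sketch of that cadence follows; it is not the kubelet's actual implementation, and the 7-second ceiling is an assumption chosen only to show where doubling would stop.

// lease_backoff.go — sketch of the doubling retry interval visible in the
// "Failed to ensure lease exists, will retry" entries; illustrative only.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap, not taken from the log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}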
Feb 17 15:58:10 crc kubenswrapper[4808]: E0217 15:58:10.189220 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513e037c419d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,LastTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:10 crc kubenswrapper[4808]: E0217 15:58:10.626915 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s" Feb 17 15:58:12 crc kubenswrapper[4808]: I0217 15:58:12.489516 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" containerID="cri-o://a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8" gracePeriod=15 Feb 17 15:58:12 crc kubenswrapper[4808]: I0217 15:58:12.640319 4808 generic.go:334] "Generic (PLEG): container finished" podID="33978535-84b2-4def-af5a-d2819171e202" containerID="a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8" exitCode=0 Feb 17 15:58:12 crc kubenswrapper[4808]: I0217 15:58:12.640395 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerDied","Data":"a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8"} Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.103550 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.105428 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.106757 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.107251 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295101 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295330 4808 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295440 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295524 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295701 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295865 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295933 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296454 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296488 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296549 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296745 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296820 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296865 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296905 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297061 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297120 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297653 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297683 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297706 4808 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.299466 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.299574 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.307067 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff" (OuterVolumeSpecName: "kube-api-access-hw8ff") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "kube-api-access-hw8ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.311496 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.311983 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.312215 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.313196 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.314014 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.319000 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.319427 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.319713 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399718 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399814 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399839 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399860 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399881 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399903 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399926 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399949 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399968 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399986 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.400005 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.652032 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" 
event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerDied","Data":"844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41"} Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.652158 4808 scope.go:117] "RemoveContainer" containerID="a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.652188 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.653190 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.653981 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.654527 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.687025 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.687874 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.688736 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: E0217 15:58:13.828836 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="6.4s" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.677290 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 
15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.677374 4808 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1" exitCode=1 Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.677418 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1"} Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.678069 4808 scope.go:117] "RemoveContainer" containerID="8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.678564 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.679474 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.679731 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.680131 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.145138 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.147776 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.148092 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.148303 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.148488 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.169823 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.169852 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: E0217 15:58:16.170140 4808 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.170698 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: W0217 15:58:16.226036 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a WatchSource:0}: Error finding container b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a: Status 404 returned error can't find the container with id b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688485 4808 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="53b1f63ad4a1e73a7a5b4281325525eb23d2ef389b3a438a9ccc3a7cd68efb4c" exitCode=0 Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688572 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"53b1f63ad4a1e73a7a5b4281325525eb23d2ef389b3a438a9ccc3a7cd68efb4c"} Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688633 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a"} Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688959 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688976 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.689375 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: E0217 15:58:16.689501 4808 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.689896 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.690690 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 
15:58:16.691469 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.693556 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.693681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55253c43280c77b6bc119cc1128dfc269bbd55032c2487fbd10280a86ea1efe4"} Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.694696 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.695232 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.695764 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.696226 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:17 crc kubenswrapper[4808]: I0217 15:58:17.703829 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1f986fc5c200a9569c35b566e4cfa04f7cfc4f9f9c2e396942948391360e9f7"} Feb 17 15:58:17 crc kubenswrapper[4808]: I0217 15:58:17.704143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"45bb6769049b70b73e54028c6538c1a4c6afe8cd3f1ea0ec050dc73c5f84b0f5"} Feb 17 15:58:17 crc kubenswrapper[4808]: I0217 15:58:17.704159 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"354e04967c324787ec4d636571c752b9a3dbcd280e56881c5956bf447f10c843"} Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 
15:58:18.714508 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c900cbde8d41dd6701813db391f8cd7ff105bdb00c8bad4c7b9052c074b82ec"} Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"39729087f2357d86ff966385e9c7822886245ecf7a94f94bda651a1d68c4040f"} Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714728 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714896 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714922 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:19 crc kubenswrapper[4808]: I0217 15:58:19.386706 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:58:19 crc kubenswrapper[4808]: I0217 15:58:19.394717 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:58:19 crc kubenswrapper[4808]: I0217 15:58:19.720368 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:58:21 crc kubenswrapper[4808]: I0217 15:58:21.171664 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:21 crc kubenswrapper[4808]: I0217 15:58:21.172247 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:21 crc kubenswrapper[4808]: I0217 15:58:21.179649 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.730233 4808 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.768391 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.768456 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.772129 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.774786 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5e083d5b-b359-4f1b-b671-3700cc1ac9ad" Feb 17 15:58:24 crc kubenswrapper[4808]: I0217 15:58:24.775851 4808 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:24 crc kubenswrapper[4808]: I0217 15:58:24.776299 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:27 crc kubenswrapper[4808]: I0217 15:58:27.183081 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5e083d5b-b359-4f1b-b671-3700cc1ac9ad" Feb 17 15:58:30 crc kubenswrapper[4808]: I0217 15:58:30.477846 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.322939 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.601468 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.762907 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.844626 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.950728 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.991406 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.055692 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.154051 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.640385 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.726867 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.765820 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.789223 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.846026 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.202984 4808 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.203661 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.203640604 podStartE2EDuration="34.203640604s" podCreationTimestamp="2026-02-17 15:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:23.68228368 +0000 UTC m=+267.198642763" watchObservedRunningTime="2026-02-17 15:58:35.203640604 +0000 UTC m=+278.719999687" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.222257 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.222682 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.222773 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.227821 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.244426 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.262001 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.26197315 podStartE2EDuration="12.26197315s" podCreationTimestamp="2026-02-17 15:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:35.256951415 +0000 UTC m=+278.773310518" watchObservedRunningTime="2026-02-17 15:58:35.26197315 +0000 UTC m=+278.778332253" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.623198 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.848694 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.111053 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.347221 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.485778 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.691266 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.692229 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.758115 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.061172 4808 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.157735 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33978535-84b2-4def-af5a-d2819171e202" path="/var/lib/kubelet/pods/33978535-84b2-4def-af5a-d2819171e202/volumes" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.187059 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.497243 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.564743 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.707108 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.801959 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.052118 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.151447 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.194544 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.224038 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.305179 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.329391 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.334664 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.341372 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.373004 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.412403 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.511016 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.531291 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.590982 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.606026 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.609031 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.625137 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.651959 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.867046 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.883254 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.938722 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.952541 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.964046 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.010664 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.026387 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.093718 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.154851 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.227801 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.288894 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.371622 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.378480 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.386450 
4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.427550 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.541140 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.550112 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.552953 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.620008 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.629937 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.635021 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.789215 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.916684 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.956564 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.973980 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.054296 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.123000 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.128870 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.156441 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.177066 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.194984 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.207174 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.310326 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.351721 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.456310 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.485250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.517622 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.666466 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.786453 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.845081 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.846992 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.848153 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.858910 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.954762 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.988447 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.999608 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.018061 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.062602 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.127430 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.136659 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.205303 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.217216 
4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.225655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.344894 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.514131 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.583763 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.614410 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.627016 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.663561 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.693569 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.721318 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.757765 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.823903 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.827236 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.890237 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.044431 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.176274 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.178061 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.296877 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.326438 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.425400 4808 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.451547 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.467220 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.478173 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.522106 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.542982 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.591864 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.633143 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.641239 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.642795 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.710050 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.770270 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.839762 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.973862 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.041194 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.085770 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.160296 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.224859 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.285553 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.349619 4808 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.575914 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.624822 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.637295 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.638837 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.665294 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.672673 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.818128 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.848303 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.848760 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.888355 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.926350 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.135737 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.191293 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.241238 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.247316 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.321917 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.343764 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.376156 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:58:44 crc 
kubenswrapper[4808]: I0217 15:58:44.395277 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.469871 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.472348 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.502744 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.536535 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.605707 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.648506 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.670639 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.681389 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.696063 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.700464 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.714360 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.867896 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.922312 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.966399 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.009847 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.073704 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.131939 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.167059 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.176964 
4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.177710 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.178893 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.189829 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.189911 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.192048 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.216523 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.259406 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.279873 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7cf76d985f-jm4q8"] Feb 17 15:58:45 crc kubenswrapper[4808]: E0217 15:58:45.280489 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerName="installer" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280529 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerName="installer" Feb 17 15:58:45 crc kubenswrapper[4808]: E0217 15:58:45.280559 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280572 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280757 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280788 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerName="installer" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.281408 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.283731 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.286402 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.286676 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.289066 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.290473 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.290579 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291219 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291336 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291882 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291869 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.295821 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.303342 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.317550 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.317748 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.319297 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.330124 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.333476 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cf76d985f-jm4q8"] Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.335655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 15:58:45 crc 
kubenswrapper[4808]: I0217 15:58:45.455987 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483717 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483790 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484044 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-policies\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484246 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-session\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484302 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xv6x\" (UniqueName: \"kubernetes.io/projected/a005d347-0020-4aa0-a3b7-9d406bfa9612-kube-api-access-2xv6x\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484441 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484611 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-dir\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.565262 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.582386 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.585895 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.585955 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-dir\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.585982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586087 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586111 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586137 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586159 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586186 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-policies\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586234 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-session\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xv6x\" (UniqueName: \"kubernetes.io/projected/a005d347-0020-4aa0-a3b7-9d406bfa9612-kube-api-access-2xv6x\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.587560 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.588215 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.588913 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " 
pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586085 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-dir\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.590654 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-policies\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.590751 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.593738 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.593786 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.594478 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.595229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.595377 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 
15:58:45.595509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-session\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.596663 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.597743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.625461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xv6x\" (UniqueName: \"kubernetes.io/projected/a005d347-0020-4aa0-a3b7-9d406bfa9612-kube-api-access-2xv6x\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.625919 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.665923 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.669098 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.785148 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.938056 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.048064 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.114877 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.171632 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.275617 4808 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.275893 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" gracePeriod=5 Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.308111 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.352931 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.421437 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.535338 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cf76d985f-jm4q8"] Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.576311 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.732654 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.858439 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.939308 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" 
event={"ID":"a005d347-0020-4aa0-a3b7-9d406bfa9612","Type":"ContainerStarted","Data":"c4541dc422fccfc33b0d062375df1b8d0617039f10384c2072492d9dbc4efadd"} Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.939364 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" event={"ID":"a005d347-0020-4aa0-a3b7-9d406bfa9612","Type":"ContainerStarted","Data":"11568489746d2c1e860e39d7b3cf2bb2ced91a2b31b61a1b72c26ed6f6a983c9"} Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.939641 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.942091 4808 patch_prober.go:28] interesting pod/oauth-openshift-7cf76d985f-jm4q8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.942136 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" podUID="a005d347-0020-4aa0-a3b7-9d406bfa9612" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.962888 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" podStartSLOduration=59.962852113 podStartE2EDuration="59.962852113s" podCreationTimestamp="2026-02-17 15:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:46.957772277 +0000 UTC m=+290.474131380" watchObservedRunningTime="2026-02-17 15:58:46.962852113 +0000 UTC m=+290.479211186" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.984278 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.108342 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.167085 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.400749 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.443110 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.471660 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.602945 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.758099 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.873141 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.874724 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.910415 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.948341 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.042189 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.068171 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.078973 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.125631 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.184290 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.333787 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.463532 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.572822 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.676352 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.799526 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.131331 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.334678 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.392885 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.434199 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.464096 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.521958 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.645833 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.655643 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.750577 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:58:50 crc kubenswrapper[4808]: I0217 15:58:50.055191 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:58:50 crc kubenswrapper[4808]: I0217 15:58:50.066073 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 15:58:50 crc kubenswrapper[4808]: I0217 15:58:50.859241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.199829 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.868240 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.868324 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.974967 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.975038 4808 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" exitCode=137 Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.975107 4808 scope.go:117] "RemoveContainer" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.975193 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983100 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983176 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983378 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983446 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983469 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983465 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983520 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983655 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984030 4808 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984072 4808 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984096 4808 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984123 4808 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.996685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:52 crc kubenswrapper[4808]: I0217 15:58:52.003090 4808 scope.go:117] "RemoveContainer" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" Feb 17 15:58:52 crc kubenswrapper[4808]: E0217 15:58:52.004236 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026\": container with ID starting with 1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026 not found: ID does not exist" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" Feb 17 15:58:52 crc kubenswrapper[4808]: I0217 15:58:52.004302 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026"} err="failed to get container status \"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026\": rpc error: code = NotFound desc = could not find container \"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026\": container with ID starting with 1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026 not found: ID does not exist" Feb 17 15:58:52 crc kubenswrapper[4808]: I0217 15:58:52.085908 4808 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.163679 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.165077 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.191077 4808 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.191146 4808 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e0f80900-d24b-4479-bbe3-b422e8628d4b" Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.198677 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.198779 4808 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e0f80900-d24b-4479-bbe3-b422e8628d4b" Feb 17 15:58:56 crc kubenswrapper[4808]: I0217 15:58:56.956870 4808 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 15:59:05 crc kubenswrapper[4808]: I0217 15:59:05.263274 4808 generic.go:334] "Generic (PLEG): container finished" podID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerID="1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2" exitCode=0 Feb 17 15:59:05 crc kubenswrapper[4808]: I0217 15:59:05.263368 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerDied","Data":"1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2"} Feb 17 15:59:05 crc kubenswrapper[4808]: I0217 15:59:05.264917 4808 scope.go:117] "RemoveContainer" containerID="1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2" Feb 17 15:59:06 crc kubenswrapper[4808]: I0217 15:59:06.274270 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerStarted","Data":"39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed"} Feb 17 15:59:06 crc kubenswrapper[4808]: I0217 15:59:06.275611 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:59:06 crc kubenswrapper[4808]: I0217 15:59:06.278513 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:59:17 crc kubenswrapper[4808]: I0217 15:59:17.867689 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 15:59:18 crc kubenswrapper[4808]: I0217 15:59:18.227337 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:59:21 crc kubenswrapper[4808]: I0217 15:59:21.592990 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:59:21 crc kubenswrapper[4808]: I0217 15:59:21.593701 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 17 15:59:51 crc kubenswrapper[4808]: I0217 15:59:51.592698 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:59:51 crc kubenswrapper[4808]: I0217 15:59:51.593739 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.203514 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:00:00 crc kubenswrapper[4808]: E0217 16:00:00.204656 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.204682 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.204918 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.205554 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.208897 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.214278 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.215999 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.276331 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.276621 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.276678 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod 
\"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.378059 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.378185 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.378213 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.379601 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.385773 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.398078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.434271 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vdjh6"] Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.435164 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.447709 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vdjh6"] Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480073 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-bound-sa-token\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480132 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-tls\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-trusted-ca\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480261 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480392 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-certificates\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68a94516-1d30-4e3c-ac74-900be5a9a652-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480704 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68a94516-1d30-4e3c-ac74-900be5a9a652-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x47hm\" (UniqueName: 
\"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-kube-api-access-x47hm\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.506735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.537560 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582213 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-certificates\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582254 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68a94516-1d30-4e3c-ac74-900be5a9a652-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582279 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68a94516-1d30-4e3c-ac74-900be5a9a652-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582298 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x47hm\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-kube-api-access-x47hm\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582332 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-bound-sa-token\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582350 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-tls\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582371 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-trusted-ca\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.583312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68a94516-1d30-4e3c-ac74-900be5a9a652-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.583591 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-trusted-ca\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.583773 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-certificates\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.588013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68a94516-1d30-4e3c-ac74-900be5a9a652-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.588119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-tls\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.598017 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-bound-sa-token\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.598800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x47hm\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-kube-api-access-x47hm\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.751998 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.753728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.015164 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vdjh6"] Feb 17 16:00:01 crc kubenswrapper[4808]: W0217 16:00:01.024290 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68a94516_1d30_4e3c_ac74_900be5a9a652.slice/crio-6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310 WatchSource:0}: Error finding container 6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310: Status 404 returned error can't find the container with id 6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310 Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.707043 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" event={"ID":"68a94516-1d30-4e3c-ac74-900be5a9a652","Type":"ContainerStarted","Data":"3bb1816d2059313cbb34f34bfd48513a99e6ba235b649edb11e010b90a5c62b6"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.707380 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" event={"ID":"68a94516-1d30-4e3c-ac74-900be5a9a652","Type":"ContainerStarted","Data":"6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.707397 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.708776 4808 generic.go:334] "Generic (PLEG): container finished" podID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerID="a5c43165b9e051b89a89100aebbe7b3cc4c01775c317fec65c06ca231b1fc493" exitCode=0 Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.708829 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" event={"ID":"d231c3b2-ee81-488d-b526-77ab9c8a2822","Type":"ContainerDied","Data":"a5c43165b9e051b89a89100aebbe7b3cc4c01775c317fec65c06ca231b1fc493"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.708862 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" event={"ID":"d231c3b2-ee81-488d-b526-77ab9c8a2822","Type":"ContainerStarted","Data":"f6804ef9baa91191e4b576d5f932378596dfb3ab3b8a9e55ede18e311e5b2d6f"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.729829 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" podStartSLOduration=1.729813227 podStartE2EDuration="1.729813227s" podCreationTimestamp="2026-02-17 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:01.726627081 +0000 UTC m=+365.242986184" watchObservedRunningTime="2026-02-17 16:00:01.729813227 +0000 UTC m=+365.246172300" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.041916 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.220462 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"d231c3b2-ee81-488d-b526-77ab9c8a2822\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.220664 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod \"d231c3b2-ee81-488d-b526-77ab9c8a2822\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.220720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"d231c3b2-ee81-488d-b526-77ab9c8a2822\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.222031 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume" (OuterVolumeSpecName: "config-volume") pod "d231c3b2-ee81-488d-b526-77ab9c8a2822" (UID: "d231c3b2-ee81-488d-b526-77ab9c8a2822"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.227246 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp" (OuterVolumeSpecName: "kube-api-access-lpnxp") pod "d231c3b2-ee81-488d-b526-77ab9c8a2822" (UID: "d231c3b2-ee81-488d-b526-77ab9c8a2822"). InnerVolumeSpecName "kube-api-access-lpnxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.233793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d231c3b2-ee81-488d-b526-77ab9c8a2822" (UID: "d231c3b2-ee81-488d-b526-77ab9c8a2822"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.323593 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.323846 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.323911 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.724764 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" event={"ID":"d231c3b2-ee81-488d-b526-77ab9c8a2822","Type":"ContainerDied","Data":"f6804ef9baa91191e4b576d5f932378596dfb3ab3b8a9e55ede18e311e5b2d6f"} Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.724807 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6804ef9baa91191e4b576d5f932378596dfb3ab3b8a9e55ede18e311e5b2d6f" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.724859 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:20 crc kubenswrapper[4808]: I0217 16:00:20.759774 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:20 crc kubenswrapper[4808]: I0217 16:00:20.842538 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.592639 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.592763 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.592839 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.596161 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.596344 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1" gracePeriod=600 Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.850079 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1" exitCode=0 Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.850491 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"} Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.850544 4808 scope.go:117] "RemoveContainer" containerID="383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9" Feb 17 16:00:22 crc kubenswrapper[4808]: I0217 16:00:22.859150 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db"} Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.728795 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.729772 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hn7fn" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server" containerID="cri-o://ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.751531 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22x8m"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.751934 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-22x8m" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server" containerID="cri-o://5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.772910 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.773324 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" containerID="cri-o://39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.781520 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.781968 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" containerID="cri-o://1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0" 
gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.787300 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.787686 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8jsrz" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" containerID="cri-o://aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.792941 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2wfq"] Feb 17 16:00:39 crc kubenswrapper[4808]: E0217 16:00:39.793470 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerName="collect-profiles" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.793503 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerName="collect-profiles" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.793749 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerName="collect-profiles" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.794537 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.796400 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2wfq"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.908382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5dtb\" (UniqueName: \"kubernetes.io/projected/012287fd-dda3-4c7b-af1f-576ec2dc479b-kube-api-access-c5dtb\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.908448 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.908481 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.009198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5dtb\" (UniqueName: \"kubernetes.io/projected/012287fd-dda3-4c7b-af1f-576ec2dc479b-kube-api-access-c5dtb\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc 
kubenswrapper[4808]: I0217 16:00:40.009247 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.009287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.010894 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.017487 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.028802 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5dtb\" (UniqueName: \"kubernetes.io/projected/012287fd-dda3-4c7b-af1f-576ec2dc479b-kube-api-access-c5dtb\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.035364 4808 generic.go:334] "Generic (PLEG): container finished" podID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerID="aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.035425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.040229 4808 generic.go:334] "Generic (PLEG): container finished" podID="543b2019-8399-411e-8e8b-45787b96873f" containerID="5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.040393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.043514 4808 generic.go:334] "Generic (PLEG): container finished" podID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerID="39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed" exitCode=0 Feb 17 
16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.043780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerDied","Data":"39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.043817 4808 scope.go:117] "RemoveContainer" containerID="1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.056220 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerID="ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.056307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.059849 4808 generic.go:334] "Generic (PLEG): container finished" podID="48efd125-e3aa-444d-91a3-fa915be48b46" containerID="1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.059911 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.184076 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.197047 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22x8m" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.198448 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.198664 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.201748 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.247242 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316243 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") pod \"48efd125-e3aa-444d-91a3-fa915be48b46\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316302 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"48efd125-e3aa-444d-91a3-fa915be48b46\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316333 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"543b2019-8399-411e-8e8b-45787b96873f\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316386 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"b0793347-d948-480b-b5a7-d0fed7e12b38\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316421 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316440 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316467 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316490 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"48efd125-e3aa-444d-91a3-fa915be48b46\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316516 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"b0793347-d948-480b-b5a7-d0fed7e12b38\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316551 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdhmj\" (UniqueName: 
\"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"b0793347-d948-480b-b5a7-d0fed7e12b38\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.317487 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities" (OuterVolumeSpecName: "utilities") pod "48efd125-e3aa-444d-91a3-fa915be48b46" (UID: "48efd125-e3aa-444d-91a3-fa915be48b46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.318089 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b0793347-d948-480b-b5a7-d0fed7e12b38" (UID: "b0793347-d948-480b-b5a7-d0fed7e12b38"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.318699 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities" (OuterVolumeSpecName: "utilities") pod "a1db3ff7-c43f-412e-ab72-3d592b6352b0" (UID: "a1db3ff7-c43f-412e-ab72-3d592b6352b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323408 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm" (OuterVolumeSpecName: "kube-api-access-ptbxm") pod "48efd125-e3aa-444d-91a3-fa915be48b46" (UID: "48efd125-e3aa-444d-91a3-fa915be48b46"). InnerVolumeSpecName "kube-api-access-ptbxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323438 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n" (OuterVolumeSpecName: "kube-api-access-sp46n") pod "a1db3ff7-c43f-412e-ab72-3d592b6352b0" (UID: "a1db3ff7-c43f-412e-ab72-3d592b6352b0"). InnerVolumeSpecName "kube-api-access-sp46n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323495 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n" (OuterVolumeSpecName: "kube-api-access-h922n") pod "543b2019-8399-411e-8e8b-45787b96873f" (UID: "543b2019-8399-411e-8e8b-45787b96873f"). InnerVolumeSpecName "kube-api-access-h922n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323714 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b0793347-d948-480b-b5a7-d0fed7e12b38" (UID: "b0793347-d948-480b-b5a7-d0fed7e12b38"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.324469 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"543b2019-8399-411e-8e8b-45787b96873f\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.324493 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"543b2019-8399-411e-8e8b-45787b96873f\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325000 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325013 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325025 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325035 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325048 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325057 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325066 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325415 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj" (OuterVolumeSpecName: "kube-api-access-cdhmj") pod "b0793347-d948-480b-b5a7-d0fed7e12b38" (UID: "b0793347-d948-480b-b5a7-d0fed7e12b38"). InnerVolumeSpecName "kube-api-access-cdhmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.326379 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities" (OuterVolumeSpecName: "utilities") pod "543b2019-8399-411e-8e8b-45787b96873f" (UID: "543b2019-8399-411e-8e8b-45787b96873f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.358315 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48efd125-e3aa-444d-91a3-fa915be48b46" (UID: "48efd125-e3aa-444d-91a3-fa915be48b46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.388487 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1db3ff7-c43f-412e-ab72-3d592b6352b0" (UID: "a1db3ff7-c43f-412e-ab72-3d592b6352b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.389460 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "543b2019-8399-411e-8e8b-45787b96873f" (UID: "543b2019-8399-411e-8e8b-45787b96873f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425451 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425604 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425836 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425847 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425856 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425866 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 
17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425875 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.427681 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities" (OuterVolumeSpecName: "utilities") pod "e22d34a8-92f6-4a2a-a0f5-e063c25afac1" (UID: "e22d34a8-92f6-4a2a-a0f5-e063c25afac1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.431229 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2wfq"] Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.433760 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc" (OuterVolumeSpecName: "kube-api-access-bfwdc") pod "e22d34a8-92f6-4a2a-a0f5-e063c25afac1" (UID: "e22d34a8-92f6-4a2a-a0f5-e063c25afac1"). InnerVolumeSpecName "kube-api-access-bfwdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.527008 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.527233 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.578712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e22d34a8-92f6-4a2a-a0f5-e063c25afac1" (UID: "e22d34a8-92f6-4a2a-a0f5-e063c25afac1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.628370 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.067327 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerDied","Data":"026165e1bd109fad794dffddae09d3e255a5318f60f94f71f305c72e7d4ac00e"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.067388 4808 scope.go:117] "RemoveContainer" containerID="39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.067355 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.071741 4808 util.go:48] "No ready sandbox for pod can be found. 
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.071757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"a45a3dcf61a1bf78b3c958287ad11993acb14303ea923a5033d56896c26a6ab3"}
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.074109 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" event={"ID":"012287fd-dda3-4c7b-af1f-576ec2dc479b","Type":"ContainerStarted","Data":"eaf65c679dacb3b04fb5e80de2028cbc11e3e31becac5bae377dfc8eaba3fedd"}
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.074157 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" event={"ID":"012287fd-dda3-4c7b-af1f-576ec2dc479b","Type":"ContainerStarted","Data":"175ef94fb6c0bf727103da307105f12e6f048b80375e60513ea8f41627457074"}
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.074184 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.077123 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.077370 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"126635f0be61976c959568021a2dceebba5ec8a4421ba4bd848eb5998d5c720b"}
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.080111 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.087153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2"}
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.087220 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.094832 4808 scope.go:117] "RemoveContainer" containerID="ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.101284 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" podStartSLOduration=2.101273891 podStartE2EDuration="2.101273891s" podCreationTimestamp="2026-02-17 16:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:41.10024706 +0000 UTC m=+404.616606153" watchObservedRunningTime="2026-02-17 16:00:41.101273891 +0000 UTC m=+404.617632964"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.119634 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.127028 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"88ab9dc080b2cadb5ff2951ac6094d56029248c1c148ac36b7e2a6167225bf7c"}
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.127188 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.128519 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.137129 4808 scope.go:117] "RemoveContainer" containerID="56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.158237 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" path="/var/lib/kubelet/pods/a1db3ff7-c43f-412e-ab72-3d592b6352b0/volumes"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.162692 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.162776 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.163180 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.189091 4808 scope.go:117] "RemoveContainer" containerID="b039d42ff08392f60bfd69fd494b2249c19f74796e443b4b4b8b827c93e49b48"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.213700 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.217929 4808 scope.go:117] "RemoveContainer" containerID="1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.221127 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.229359 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.233436 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22x8m"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.234764 4808 scope.go:117] "RemoveContainer" containerID="2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.239974 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-22x8m"]
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.254748 4808 scope.go:117] "RemoveContainer" containerID="2d27bebccfda20ebcc5c228a8194fccc9e95ec81e20baedc530a917fdd03e867"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.274010 4808 scope.go:117] "RemoveContainer" containerID="aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.294215 4808 scope.go:117] "RemoveContainer" containerID="616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.311634 4808 scope.go:117] "RemoveContainer" containerID="3c46a03c8aecba377b0d1ea2fda18a067c3dd9d9e53d4229b5338fca0d7a98e0"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.323502 4808 scope.go:117] "RemoveContainer" containerID="5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.337027 4808 scope.go:117] "RemoveContainer" containerID="335aab9c25e746284f138cf133ee4f794236186f62c6450d29a99ecbca2622cc"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.353352 4808 scope.go:117] "RemoveContainer" containerID="a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.919165 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bbhct"]
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922042 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922068 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922080 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922089 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922102 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922114 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922127 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-utilities"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922135 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-utilities"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922143 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-utilities"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922150 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-utilities"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922160 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922167 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922180 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922187 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922197 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922204 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922212 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922219 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922227 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922258 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-content"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922269 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922276 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922289 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-utilities"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922296 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-utilities"
Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922304 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-utilities"
Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922311 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-utilities"
containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922322 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922330 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922454 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922473 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922495 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922504 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922512 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.923435 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.926926 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.936444 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbhct"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.048300 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-utilities\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.048421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-catalog-content\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.048450 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dq7r\" (UniqueName: \"kubernetes.io/projected/5011758e-a6e4-4491-8ac6-c0a8bcb50568-kube-api-access-8dq7r\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.116388 4808 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-operators-lstjz"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.119465 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.122882 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.138212 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lstjz"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.149281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-catalog-content\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.149336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dq7r\" (UniqueName: \"kubernetes.io/projected/5011758e-a6e4-4491-8ac6-c0a8bcb50568-kube-api-access-8dq7r\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.149413 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-utilities\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.150176 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-catalog-content\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.150221 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-utilities\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.190566 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dq7r\" (UniqueName: \"kubernetes.io/projected/5011758e-a6e4-4491-8ac6-c0a8bcb50568-kube-api-access-8dq7r\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.250254 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-utilities\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.250657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-catalog-content\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.250882 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcnxj\" (UniqueName: \"kubernetes.io/projected/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-kube-api-access-jcnxj\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.266693 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.352938 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-catalog-content\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353259 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcnxj\" (UniqueName: \"kubernetes.io/projected/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-kube-api-access-jcnxj\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-utilities\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353761 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-catalog-content\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353864 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-utilities\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.377283 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcnxj\" (UniqueName: \"kubernetes.io/projected/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-kube-api-access-jcnxj\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.452498 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbhct"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.452557 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: W0217 16:00:42.463386 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5011758e_a6e4_4491_8ac6_c0a8bcb50568.slice/crio-921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284 WatchSource:0}: Error finding container 921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284: Status 404 returned error can't find the container with id 921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284 Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.661509 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lstjz"] Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.156671 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" path="/var/lib/kubelet/pods/48efd125-e3aa-444d-91a3-fa915be48b46/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.158004 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543b2019-8399-411e-8e8b-45787b96873f" path="/var/lib/kubelet/pods/543b2019-8399-411e-8e8b-45787b96873f/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.159375 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" path="/var/lib/kubelet/pods/b0793347-d948-480b-b5a7-d0fed7e12b38/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.160939 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" path="/var/lib/kubelet/pods/e22d34a8-92f6-4a2a-a0f5-e063c25afac1/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.182021 4808 generic.go:334] "Generic (PLEG): container finished" podID="bcdfcb0d-7a0d-4cee-a80f-f49f078bef37" containerID="127179db16e67d9e8dcadf6734e266e67993b9f846ab820cb629d1308633756f" exitCode=0 Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.182125 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerDied","Data":"127179db16e67d9e8dcadf6734e266e67993b9f846ab820cb629d1308633756f"} Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.182164 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerStarted","Data":"d040bb42b76433ad539601aaec69ac52d503fa1b69b306ae00d824d1707f5b1a"} Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.185238 4808 generic.go:334] "Generic (PLEG): container finished" podID="5011758e-a6e4-4491-8ac6-c0a8bcb50568" containerID="c596161aeadceeb328bd43505150bab123a2f2a537b42718bb7e2a8b06f27acf" exitCode=0 Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.186320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbhct" event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerDied","Data":"c596161aeadceeb328bd43505150bab123a2f2a537b42718bb7e2a8b06f27acf"} Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.186394 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbhct" 
event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerStarted","Data":"921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284"} Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.192519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerStarted","Data":"2daf81ecd3c16485533bbe62503f83d4e79a667aade15b55d10480d78481ba20"} Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.322990 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.324552 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.326806 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.341162 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.501172 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.501240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.501276 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.513255 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snf82"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.514826 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.521143 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.526201 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snf82"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603149 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603478 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603520 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603929 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.604276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.627432 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.645553 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.705121 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc7j5\" (UniqueName: \"kubernetes.io/projected/9b925660-1865-4603-8f8e-f21a1c342f63-kube-api-access-vc7j5\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.705672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-catalog-content\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.705726 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-utilities\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc7j5\" (UniqueName: \"kubernetes.io/projected/9b925660-1865-4603-8f8e-f21a1c342f63-kube-api-access-vc7j5\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806255 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-catalog-content\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-utilities\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-utilities\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806888 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-catalog-content\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.828952 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc7j5\" (UniqueName: \"kubernetes.io/projected/9b925660-1865-4603-8f8e-f21a1c342f63-kube-api-access-vc7j5\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.835665 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.027461 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"]
Feb 17 16:00:45 crc kubenswrapper[4808]: W0217 16:00:45.034441 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cdb188e_770b_4b77_8396_a2422be880a4.slice/crio-ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4 WatchSource:0}: Error finding container ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4: Status 404 returned error can't find the container with id ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.090179 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snf82"]
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.202962 4808 generic.go:334] "Generic (PLEG): container finished" podID="5011758e-a6e4-4491-8ac6-c0a8bcb50568" containerID="0f4854f446efe5957d7c81e19b5da8c7c806c0afafb344fde0ce3aaf5d49f886" exitCode=0
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.203049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbhct" event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerDied","Data":"0f4854f446efe5957d7c81e19b5da8c7c806c0afafb344fde0ce3aaf5d49f886"}
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.208742 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerStarted","Data":"63b36f7d6da84b9b4455c506dbd13856e075e7b3b6c650a39ebcaf9267f7ceaf"}
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.213241 4808 generic.go:334] "Generic (PLEG): container finished" podID="bcdfcb0d-7a0d-4cee-a80f-f49f078bef37" containerID="2daf81ecd3c16485533bbe62503f83d4e79a667aade15b55d10480d78481ba20" exitCode=0
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.213304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerDied","Data":"2daf81ecd3c16485533bbe62503f83d4e79a667aade15b55d10480d78481ba20"}
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.215905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b"}
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.215953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4"}
Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.896725 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry" containerID="cri-o://2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da" gracePeriod=30
podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry" containerID="cri-o://2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da" gracePeriod=30 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.228693 4808 generic.go:334] "Generic (PLEG): container finished" podID="9b925660-1865-4603-8f8e-f21a1c342f63" containerID="d0350e5a6a6ac994336a37c313b488f12ab8fc28005e7c91cfab28eb02b3774d" exitCode=0 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.228819 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerDied","Data":"d0350e5a6a6ac994336a37c313b488f12ab8fc28005e7c91cfab28eb02b3774d"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.236229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerStarted","Data":"32d9978b151ae50bdecbc21ec640df93bbd6346bdfdfcc6a9ac2cc3e03f96622"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.238854 4808 generic.go:334] "Generic (PLEG): container finished" podID="7cdb188e-770b-4b77-8396-a2422be880a4" containerID="47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b" exitCode=0 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.240344 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.258143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbhct" event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerStarted","Data":"fdf09729f009f935cf68d8269108df5e5ec401e39d9ce2ba72a6e317f7d6227f"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.260954 4808 generic.go:334] "Generic (PLEG): container finished" podID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerID="2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da" exitCode=0 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.260984 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerDied","Data":"2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.298302 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lstjz" podStartSLOduration=1.779485342 podStartE2EDuration="4.298277495s" podCreationTimestamp="2026-02-17 16:00:42 +0000 UTC" firstStartedPulling="2026-02-17 16:00:43.183494145 +0000 UTC m=+406.699853218" lastFinishedPulling="2026-02-17 16:00:45.702286298 +0000 UTC m=+409.218645371" observedRunningTime="2026-02-17 16:00:46.276484382 +0000 UTC m=+409.792843455" watchObservedRunningTime="2026-02-17 16:00:46.298277495 +0000 UTC m=+409.814636568" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.325550 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bbhct" podStartSLOduration=2.79889054 podStartE2EDuration="5.325530612s" podCreationTimestamp="2026-02-17 16:00:41 +0000 UTC" firstStartedPulling="2026-02-17 16:00:43.187532287 +0000 UTC 
m=+406.703891360" lastFinishedPulling="2026-02-17 16:00:45.714172359 +0000 UTC m=+409.230531432" observedRunningTime="2026-02-17 16:00:46.324310416 +0000 UTC m=+409.840669489" watchObservedRunningTime="2026-02-17 16:00:46.325530612 +0000 UTC m=+409.841889695" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.340256 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.431921 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.431987 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432084 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432176 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432344 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432376 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432418 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.433533 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.433627 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.439331 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.445939 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd" (OuterVolumeSpecName: "kube-api-access-l78nd") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "kube-api-access-l78nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.446479 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.446715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.451301 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.455677 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533818 4808 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533860 4808 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533872 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533881 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533891 4808 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533901 4808 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533911 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.270941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerDied","Data":"6e3f1081b00b18d9f343d94a49f4eb8fd3475f6dc82e8e6676483c99ff105dda"} Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.270962 4808 util.go:48] "No ready sandbox for pod can be found. 
Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.271487 4808 scope.go:117] "RemoveContainer" containerID="2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da"
Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.281302 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee"}
Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.337865 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"]
Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.341765 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"]
Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.287205 4808 generic.go:334] "Generic (PLEG): container finished" podID="7cdb188e-770b-4b77-8396-a2422be880a4" containerID="90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee" exitCode=0
Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.287258 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee"}
Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.301474 4808 generic.go:334] "Generic (PLEG): container finished" podID="9b925660-1865-4603-8f8e-f21a1c342f63" containerID="52e264425fb80accc6368ccf3807bac64ef6f8e36953f6e0db1eddd3a570a652" exitCode=0
Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.301688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerDied","Data":"52e264425fb80accc6368ccf3807bac64ef6f8e36953f6e0db1eddd3a570a652"}
Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.157047 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" path="/var/lib/kubelet/pods/ddc3801d-3513-460c-a719-ed9dc92697e7/volumes"
Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.308518 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1"}
Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.310203 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerStarted","Data":"55b66084d7c88b24753d4f326e3d7444972e56a90179b952814cb3b23af1b396"}
Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.339262 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqtsg" podStartSLOduration=2.880396288 podStartE2EDuration="5.33924663s" podCreationTimestamp="2026-02-17 16:00:44 +0000 UTC" firstStartedPulling="2026-02-17 16:00:46.242196229 +0000 UTC m=+409.758555302" lastFinishedPulling="2026-02-17 16:00:48.701046561 +0000 UTC m=+412.217405644" observedRunningTime="2026-02-17 16:00:49.334794435 +0000 UTC m=+412.851153518" watchObservedRunningTime="2026-02-17 16:00:49.33924663 +0000 UTC m=+412.855605703"
Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.351773 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snf82" podStartSLOduration=2.776472529 podStartE2EDuration="5.35175468s" podCreationTimestamp="2026-02-17 16:00:44 +0000 UTC" firstStartedPulling="2026-02-17 16:00:46.232051011 +0000 UTC m=+409.748410084" lastFinishedPulling="2026-02-17 16:00:48.807333162 +0000 UTC m=+412.323692235" observedRunningTime="2026-02-17 16:00:49.350176403 +0000 UTC m=+412.866535496" watchObservedRunningTime="2026-02-17 16:00:49.35175468 +0000 UTC m=+412.868113763"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.267130 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.267740 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.326531 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.404123 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.453554 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.454762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.503689 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:53 crc kubenswrapper[4808]: I0217 16:00:53.395441 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.646467 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.647157 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.683366 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.836615 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.836680 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.876093 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:55 crc kubenswrapper[4808]: I0217 16:00:55.381208 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:55 crc kubenswrapper[4808]: I0217 16:00:55.384083 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:02:21 crc kubenswrapper[4808]: I0217 16:02:21.592835 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:02:21 crc kubenswrapper[4808]: I0217 16:02:21.593633 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:02:51 crc kubenswrapper[4808]: I0217 16:02:51.591915 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:02:51 crc kubenswrapper[4808]: I0217 16:02:51.592706 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.592278 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.592972 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.593034 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.593877 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.594007 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db" gracePeriod=600
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.403385 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db" exitCode=0
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.403464 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db"}
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.404473 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c"}
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.404518 4808 scope.go:117] "RemoveContainer" containerID="77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"
Feb 17 16:05:21 crc kubenswrapper[4808]: I0217 16:05:21.593325 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:05:21 crc kubenswrapper[4808]: I0217 16:05:21.594104 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.389141 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"]
Feb 17 16:05:39 crc kubenswrapper[4808]: E0217 16:05:39.390171 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.390191 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.390338 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.391349 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
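
The machine-config-daemon sequence above shows the kubelet probing http://127.0.0.1:8798/health once every 30 seconds (16:02:21, 16:02:51, 16:03:21), declaring the container unhealthy on the third consecutive failure, and then killing it with gracePeriod=600. A stripped-down Go sketch of that probe loop; periodSeconds=30 and failureThreshold=3 are inferred from the timestamps above, not read from the pod spec:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        const (
            url              = "http://127.0.0.1:8798/health"
            periodSeconds    = 30 * time.Second
            failureThreshold = 3
        )
        failures := 0
        for range time.Tick(periodSeconds) {
            resp, err := http.Get(url)
            if err == nil && resp.StatusCode < 400 {
                resp.Body.Close()
                failures = 0 // any success resets the counter
                continue
            }
            if resp != nil {
                resp.Body.Close()
            }
            failures++
            fmt.Printf("Probe failed (%d/%d): %v\n", failures, failureThreshold, err)
            if failures >= failureThreshold {
                fmt.Println("failed liveness probe, will be restarted")
                return // the kubelet would now kill the container with its grace period
            }
        }
    }
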
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.393869 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.406270 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"]
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.491273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.491348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.491410 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593282 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593419 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593701 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.594032 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.626216 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.719019 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.975071 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"]
Feb 17 16:05:40 crc kubenswrapper[4808]: I0217 16:05:40.424337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerStarted","Data":"c1927813e5dee42974ad95f87121936cfcb59e339c6af53fbdcd594c1a9d8a41"}
Feb 17 16:05:40 crc kubenswrapper[4808]: I0217 16:05:40.424421 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerStarted","Data":"1cf44481943a899439fc15a8de81c91b62c9ca1868a444f67bef4eb79a7c7f80"}
Feb 17 16:05:41 crc kubenswrapper[4808]: I0217 16:05:41.432154 4808 generic.go:334] "Generic (PLEG): container finished" podID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerID="c1927813e5dee42974ad95f87121936cfcb59e339c6af53fbdcd594c1a9d8a41" exitCode=0
Feb 17 16:05:41 crc kubenswrapper[4808]: I0217 16:05:41.432594 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"c1927813e5dee42974ad95f87121936cfcb59e339c6af53fbdcd594c1a9d8a41"}
Feb 17 16:05:41 crc kubenswrapper[4808]: I0217 16:05:41.438162 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 16:05:43 crc kubenswrapper[4808]: I0217 16:05:43.450095 4808 generic.go:334] "Generic (PLEG): container finished" podID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerID="495964b7fe8320dfa69f3d266112f71b2d4ec51d673ac680479f0aac4c456279" exitCode=0
Feb 17 16:05:43 crc kubenswrapper[4808]: I0217 16:05:43.450212 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"495964b7fe8320dfa69f3d266112f71b2d4ec51d673ac680479f0aac4c456279"}
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"495964b7fe8320dfa69f3d266112f71b2d4ec51d673ac680479f0aac4c456279"} Feb 17 16:05:44 crc kubenswrapper[4808]: I0217 16:05:44.467175 4808 generic.go:334] "Generic (PLEG): container finished" podID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerID="b98b03db716e9694fdfd21b758179be84383bdb2aafaecd25d545be5dc8eaedd" exitCode=0 Feb 17 16:05:44 crc kubenswrapper[4808]: I0217 16:05:44.467233 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"b98b03db716e9694fdfd21b758179be84383bdb2aafaecd25d545be5dc8eaedd"} Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.743391 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.894785 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.894866 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.894934 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.899910 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle" (OuterVolumeSpecName: "bundle") pod "11d9feea-2c1d-48e4-9cf4-bde172f9faea" (UID: "11d9feea-2c1d-48e4-9cf4-bde172f9faea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.903955 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz" (OuterVolumeSpecName: "kube-api-access-x4vtz") pod "11d9feea-2c1d-48e4-9cf4-bde172f9faea" (UID: "11d9feea-2c1d-48e4-9cf4-bde172f9faea"). InnerVolumeSpecName "kube-api-access-x4vtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.915961 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util" (OuterVolumeSpecName: "util") pod "11d9feea-2c1d-48e4-9cf4-bde172f9faea" (UID: "11d9feea-2c1d-48e4-9cf4-bde172f9faea"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.997050 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.997093 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.997113 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:46 crc kubenswrapper[4808]: I0217 16:05:46.482749 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"1cf44481943a899439fc15a8de81c91b62c9ca1868a444f67bef4eb79a7c7f80"} Feb 17 16:05:46 crc kubenswrapper[4808]: I0217 16:05:46.482809 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf44481943a899439fc15a8de81c91b62c9ca1868a444f67bef4eb79a7c7f80" Feb 17 16:05:46 crc kubenswrapper[4808]: I0217 16:05:46.482867 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" Feb 17 16:05:51 crc kubenswrapper[4808]: I0217 16:05:51.591987 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:05:51 crc kubenswrapper[4808]: I0217 16:05:51.592598 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.666772 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"] Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667116 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller" containerID="cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" gracePeriod=30 Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667139 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd" containerID="cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" gracePeriod=30 Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667213 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" 
containerName="kube-rbac-proxy-node" containerID="cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" gracePeriod=30 Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667221 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging" containerID="cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" gracePeriod=30 Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667203 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb" containerID="cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" gracePeriod=30 Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667228 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb" containerID="cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" gracePeriod=30 Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667283 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" gracePeriod=30 Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.735745 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller" containerID="cri-o://1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" gracePeriod=30 Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.385122 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.389228 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-acl-logging/0.log" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.389840 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-controller/0.log" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.390373 4808 util.go:48] "No ready sandbox for pod can be found. 
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516229 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2q7qz"]
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516444 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516459 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516470 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="extract"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516476 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="extract"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516485 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516491 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516499 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516504 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516511 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="pull"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516517 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="pull"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516525 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516532 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516540 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="util"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516546 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="util"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516555 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516561 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516586 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-node"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516592 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-node"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516599 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516605 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516613 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516619 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516625 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516630 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516638 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kubecfg-setup"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516644 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kubecfg-setup"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516652 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516657 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516747 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516757 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516764 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-node"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516771 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516778 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516785 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516791 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="extract"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516799 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516806 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516813 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516820 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516826 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516911 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516919 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.517001 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.517096 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.517102 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.518498 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.538907 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/2.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539591 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539639 4808 generic.go:334] "Generic (PLEG): container finished" podID="18916d6d-e063-40a0-816f-554f95cd2956" containerID="a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c" exitCode=2
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539699 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerDied","Data":"a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539965 4808 scope.go:117] "RemoveContainer" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.540417 4808 scope.go:117] "RemoveContainer" containerID="a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.540659 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-msgfd_openshift-multus(18916d6d-e063-40a0-816f-554f95cd2956)\"" pod="openshift-multus/multus-msgfd" podUID="18916d6d-e063-40a0-816f-554f95cd2956"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.542215 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.544149 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-acl-logging/0.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.544543 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-controller/0.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.544928 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545007 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545070 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545121 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" exitCode=0
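
The CrashLoopBackOff message above ("back-off 20s restarting failed container") reflects the kubelet's restart back-off, which (assuming the standard kubelet defaults) starts at 10s and doubles per crash up to a 5m cap, so 20s would be the container's second back-off step. A sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxDelay = 5 * time.Minute // assumed cap
        delay := 10 * time.Second        // assumed initial delay
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: back-off %s\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
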
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545178 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545230 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545283 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" exitCode=143
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545348 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" exitCode=143
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545048 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545568 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545592 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545602 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545610 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545620 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545631 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545637 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545643 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545648 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545654 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545659 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545664 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545669 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545675 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545690 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545696 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545702 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545707 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545712 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545718 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545723 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545729 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545734 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545740 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545755 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545760 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545766 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545772 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545778 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545783 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545789 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545795 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545800 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545805 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545820 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545826 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545831 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545836 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545842 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545847 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545852 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545858 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545864 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545870 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.560070 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"
containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.579031 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586200 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586258 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586291 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586350 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586398 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586771 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586844 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586856 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587071 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587080 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587137 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587150 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket" (OuterVolumeSpecName: "log-socket") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587179 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587202 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587234 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587190 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587207 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587295 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587230 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587300 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587352 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587373 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587390 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587410 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587425 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587420 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587457 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash" (OuterVolumeSpecName: "host-slash") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587440 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587478 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log" (OuterVolumeSpecName: "node-log") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587504 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587600 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-slash\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587639 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-etc-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587726 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-systemd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-systemd-units\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-netd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587794 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-log-socket\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587819 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-bin\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587842 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587859 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-config\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-kubelet\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587931 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-ovn\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587947 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-env-overrides\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587974 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-node-log\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588000 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-script-lib\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588016 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60c87e4f-f758-4e3e-a812-1636091ba578-ovn-node-metrics-cert\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-netns\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588064 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-var-lib-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588086 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588103 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xth\" (UniqueName: \"kubernetes.io/projected/60c87e4f-f758-4e3e-a812-1636091ba578-kube-api-access-l8xth\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588143 4808 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588153 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588163 4808 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588173 4808 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588182 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588192 4808 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588255 4808 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588278 4808 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588291 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588307 4808 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588319 4808 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588336 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588349 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588361 4808 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588374 4808 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588387 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587657 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.598160 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.598605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8" (OuterVolumeSpecName: "kube-api-access-qnzj8") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "kube-api-access-qnzj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.610823 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.633124 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.637057 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.664938 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690153 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-systemd-units\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-log-socket\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690223 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-netd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-bin\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690268 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690304 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-config\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-kubelet\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc 
kubenswrapper[4808]: I0217 16:05:53.690347 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-ovn\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-env-overrides\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-node-log\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690404 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60c87e4f-f758-4e3e-a812-1636091ba578-ovn-node-metrics-cert\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690419 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-script-lib\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-netns\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690461 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-var-lib-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690500 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xth\" (UniqueName: \"kubernetes.io/projected/60c87e4f-f758-4e3e-a812-1636091ba578-kube-api-access-l8xth\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690516 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-slash\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690532 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-etc-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-systemd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690597 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690609 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690618 4808 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690626 4808 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690678 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-systemd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690714 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-systemd-units\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-log-socket\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690756 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-netd\") pod \"ovnkube-node-2q7qz\" (UID: 
\"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690775 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-bin\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690796 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-config\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-kubelet\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691475 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-ovn\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691780 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-env-overrides\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691813 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-node-log\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692098 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: 
I0217 16:05:53.692276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-slash\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692728 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-etc-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692829 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-var-lib-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692809 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-netns\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-script-lib\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.699041 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60c87e4f-f758-4e3e-a812-1636091ba578-ovn-node-metrics-cert\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.703031 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.727131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xth\" (UniqueName: \"kubernetes.io/projected/60c87e4f-f758-4e3e-a812-1636091ba578-kube-api-access-l8xth\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.737829 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.755038 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.784464 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.803774 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc 
kubenswrapper[4808]: I0217 16:05:53.825103 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.825821 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.825866 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.825893 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.826279 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826331 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826366 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.826718 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826742 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container 
\"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826762 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.827063 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827104 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827131 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.827617 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827642 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827657 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828050 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828083 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} 
err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828102 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828354 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828384 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828404 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828662 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828688 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828704 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828959 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828985 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.829004 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.829237 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.829274 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.829294 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830007 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830029 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830234 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830257 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830467 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830492 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830727 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831011 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831035 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831266 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831302 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831901 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831993 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.833996 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc 
error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834098 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834488 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834513 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834823 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834850 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835087 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835107 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835341 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835361 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc 
kubenswrapper[4808]: I0217 16:05:53.835562 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835598 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835965 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835985 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836213 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836232 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836464 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836485 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836822 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID 
starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836843 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837078 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837099 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837327 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837344 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837613 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837633 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837858 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837939 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838244 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838325 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838649 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838740 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839059 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839080 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839313 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839334 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839561 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" Feb 
17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839594 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839836 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839855 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.840090 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.840109 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.841446 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.841530 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.858809 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.858847 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.859927 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status 
\"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.909997 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"] Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.914202 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"] Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.553863 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/2.log" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.555786 4808 generic.go:334] "Generic (PLEG): container finished" podID="60c87e4f-f758-4e3e-a812-1636091ba578" containerID="891243d5714197c2aa551a24c76441926698db9cb51175d7b6f86c558f055955" exitCode=0 Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.555823 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerDied","Data":"891243d5714197c2aa551a24c76441926698db9cb51175d7b6f86c558f055955"} Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.555871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"ae0d57d73f5fc05ce5ec2e4de27484ba682d37ebfa253a15a86795aafd48e9a2"} Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.580468 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"] Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.582165 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.588386 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-h7dtr" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.588914 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.588956 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.600864 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxjl\" (UniqueName: \"kubernetes.io/projected/038219cb-02e4-4451-b0d4-3e6af1518769-kube-api-access-bxxjl\") pod \"obo-prometheus-operator-68bc856cb9-lshnf\" (UID: \"038219cb-02e4-4451-b0d4-3e6af1518769\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.705905 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"] Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.706477 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.709310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxxjl\" (UniqueName: \"kubernetes.io/projected/038219cb-02e4-4451-b0d4-3e6af1518769-kube-api-access-bxxjl\") pod \"obo-prometheus-operator-68bc856cb9-lshnf\" (UID: \"038219cb-02e4-4451-b0d4-3e6af1518769\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.711433 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nbl5d" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.711743 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.716932 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"] Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.717627 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.757165 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxxjl\" (UniqueName: \"kubernetes.io/projected/038219cb-02e4-4451-b0d4-3e6af1518769-kube-api-access-bxxjl\") pod \"obo-prometheus-operator-68bc856cb9-lshnf\" (UID: \"038219cb-02e4-4451-b0d4-3e6af1518769\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810420 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810498 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810519 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810558 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.837625 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7nl9q"] Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.838316 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.843212 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.843401 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7x9g9" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912173 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2fm\" (UniqueName: \"kubernetes.io/projected/c7703980-a631-414f-b3fc-a76dfdd1e085-kube-api-access-bg2fm\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912275 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912298 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7703980-a631-414f-b3fc-a76dfdd1e085-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912371 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912474 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: 
\"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.916416 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.923664 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.929878 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.931136 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.935403 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952741 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952805 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952823 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952855 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" podUID="038219cb-02e4-4451-b0d4-3e6af1518769" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.013565 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg2fm\" (UniqueName: \"kubernetes.io/projected/c7703980-a631-414f-b3fc-a76dfdd1e085-kube-api-access-bg2fm\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.013664 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7703980-a631-414f-b3fc-a76dfdd1e085-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.016090 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pkvl8"] Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.016769 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.018295 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7703980-a631-414f-b3fc-a76dfdd1e085-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.020863 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-dqww6" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.037604 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.038441 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg2fm\" (UniqueName: \"kubernetes.io/projected/c7703980-a631-414f-b3fc-a76dfdd1e085-kube-api-access-bg2fm\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056143 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056211 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056230 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056291 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" podUID="6d2656af-cd69-49ff-8d35-7c81fa4c4693" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.067008 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090669 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090735 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090768 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090838 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" podUID="2b8a3138-8c3d-434b-9069-8cafc18a0111" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.114902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvcn\" (UniqueName: \"kubernetes.io/projected/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-kube-api-access-dvvcn\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.114963 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.152517 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" path="/var/lib/kubelet/pods/5748f02a-e3dd-47c7-b89d-b472c718e593/volumes" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.162941 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188781 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188852 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188873 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188923 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" podUID="c7703980-a631-414f-b3fc-a76dfdd1e085" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.216487 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvcn\" (UniqueName: \"kubernetes.io/projected/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-kube-api-access-dvvcn\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.216600 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.217491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.240920 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvcn\" (UniqueName: \"kubernetes.io/projected/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-kube-api-access-dvvcn\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.339947 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358191 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358257 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358282 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358343 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" podUID="b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab" Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"ee57b94cab0b03328a446cdf0ae564fea660e269b2587ae2cc143ac045e98980"} Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564191 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"f0afa1fc9ee7af0b73896d96c3b6c8e7d59ce02d7e7b4baa4b2462925eb7159a"} Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564202 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"139a35b7f1e25b6300d41c7bbeb759d48a42a0f5b0ead08cb8437ca8ff60d5f2"} Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"14e480693f2117575fae84765eb1818fcff9d17e172dcdc8602f08558cc059b0"} Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"7e19c5b68e5100b134fd90854f3c6959f62854a72d5c94541b09aed5b4f8f89b"} Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564228 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"7766663331b10bfbe045973076d5aa51a9dff0225e6a2f9d0fb225d78ff287be"} Feb 17 16:05:57 crc kubenswrapper[4808]: I0217 16:05:57.581381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"b397ee82843a1a5ec091822d16025b85f95efbbf1af5d1d8088446cb3f45843c"} Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"153a45d841ae98960df594c65a735856b8792637444cdab267529897e8dbff9b"} Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600845 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600867 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600881 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.636754 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"] Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.636925 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.637024 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.637528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.638613 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.643879 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"] Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.644050 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.644613 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.646443 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" podStartSLOduration=7.646420303 podStartE2EDuration="7.646420303s" podCreationTimestamp="2026-02-17 16:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:00.628726112 +0000 UTC m=+724.145085195" watchObservedRunningTime="2026-02-17 16:06:00.646420303 +0000 UTC m=+724.162779386" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.671830 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"] Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.671987 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.672769 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.674505 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pkvl8"] Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.674671 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.675214 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.677962 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.678018 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.678045 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.678089 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" podUID="038219cb-02e4-4451-b0d4-3e6af1518769" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684482 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684552 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684597 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" podUID="2b8a3138-8c3d-434b-9069-8cafc18a0111" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.731312 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7nl9q"] Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.731474 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.732138 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769724 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769818 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769846 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769897 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" podUID="b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774503 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774579 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774599 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" podUID="6d2656af-cd69-49ff-8d35-7c81fa4c4693" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.793829 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.793930 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.793957 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.794009 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" podUID="c7703980-a631-414f-b3fc-a76dfdd1e085" Feb 17 16:06:06 crc kubenswrapper[4808]: I0217 16:06:06.146019 4808 scope.go:117] "RemoveContainer" containerID="a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c" Feb 17 16:06:06 crc kubenswrapper[4808]: I0217 16:06:06.644872 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/2.log" Feb 17 16:06:06 crc kubenswrapper[4808]: I0217 16:06:06.645326 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"b2be79d131dfd425911d83bcd2437def405f952539da3aa726991db602fe1e17"} Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.144810 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.144929 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.145688 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.145781 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.407622 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"] Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.455964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"] Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.674701 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" event={"ID":"6d2656af-cd69-49ff-8d35-7c81fa4c4693","Type":"ContainerStarted","Data":"315ce1493cadfe027f2be0c66995e53f8d57e66808c72c5c73f5a6d7953d7001"} Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.675759 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" event={"ID":"038219cb-02e4-4451-b0d4-3e6af1518769","Type":"ContainerStarted","Data":"6feedadaaaffce9323d260982aca6f22ce23b4483b518e9cd46fd3c2081fd6aa"} Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.145745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.146126 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.337832 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pkvl8"] Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.692678 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" event={"ID":"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab","Type":"ContainerStarted","Data":"0f1e424d6710d90da9306f1017501fb0f80ca068e4469ff6268b207067114701"} Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.144820 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.145523 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.368177 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7nl9q"] Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.699451 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" event={"ID":"c7703980-a631-414f-b3fc-a76dfdd1e085","Type":"ContainerStarted","Data":"14af039fdf7c3008c63aa220221515c0b42dcaa912e3a1c9ad8e3e5786a07af3"} Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.145383 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.146140 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.446798 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"] Feb 17 16:06:16 crc kubenswrapper[4808]: W0217 16:06:16.472725 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b8a3138_8c3d_434b_9069_8cafc18a0111.slice/crio-cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9 WatchSource:0}: Error finding container cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9: Status 404 returned error can't find the container with id cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9 Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.755878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" event={"ID":"2b8a3138-8c3d-434b-9069-8cafc18a0111","Type":"ContainerStarted","Data":"cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9"} Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.592361 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.592421 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.592469 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.593059 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.593109 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c" gracePeriod=600 Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.796282 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c" exitCode=0 Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.796326 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c"} Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.796357 4808 scope.go:117] "RemoveContainer" containerID="088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.815093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" event={"ID":"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab","Type":"ContainerStarted","Data":"8ac777c99872f45b25a038f193252f3ffa545029acd4e9f5bd4fb467aa7397f2"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.815624 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.817177 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" event={"ID":"2b8a3138-8c3d-434b-9069-8cafc18a0111","Type":"ContainerStarted","Data":"0c737d97005027182cee956998bce1cc09e0e41efcdf257112ad80295357b063"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.819908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" event={"ID":"6d2656af-cd69-49ff-8d35-7c81fa4c4693","Type":"ContainerStarted","Data":"98e09363b1bb0f0a86eae5e4462dd49cf323aef6acfb9841f69bac483cb8fe03"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.822346 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" event={"ID":"038219cb-02e4-4451-b0d4-3e6af1518769","Type":"ContainerStarted","Data":"6176baeb8348833598843dee63a35c5629f6ddbd0a35d4dff740d9c4accddfdb"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.826092 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.830102 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" event={"ID":"c7703980-a631-414f-b3fc-a76dfdd1e085","Type":"ContainerStarted","Data":"3da5b1ba6353f511635696dc8f27ed1b144f737a18540f1a3d1a058357382927"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.830506 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.855025 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" podStartSLOduration=20.573321866 podStartE2EDuration="29.855003579s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:13.346356187 +0000 UTC m=+736.862715260" lastFinishedPulling="2026-02-17 16:06:22.6280379 +0000 UTC m=+746.144396973" observedRunningTime="2026-02-17 16:06:23.848393599 +0000 UTC m=+747.364752702" watchObservedRunningTime="2026-02-17 16:06:23.855003579 +0000 UTC m=+747.371362692" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.867968 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.876479 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.927303 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" podStartSLOduration=23.754188817 podStartE2EDuration="29.927283573s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:16.475823251 +0000 UTC m=+739.992182324" lastFinishedPulling="2026-02-17 16:06:22.648918007 +0000 UTC m=+746.165277080" observedRunningTime="2026-02-17 16:06:23.886541986 +0000 UTC m=+747.402901099" watchObservedRunningTime="2026-02-17 16:06:23.927283573 +0000 UTC m=+747.443642656" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.943489 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" podStartSLOduration=19.796835357 podStartE2EDuration="29.943467432s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:12.481330863 +0000 UTC m=+735.997689936" lastFinishedPulling="2026-02-17 16:06:22.627962948 +0000 UTC m=+746.144322011" observedRunningTime="2026-02-17 16:06:23.939750042 +0000 UTC m=+747.456109155" watchObservedRunningTime="2026-02-17 16:06:23.943467432 +0000 UTC m=+747.459826515" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.968599 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" podStartSLOduration=19.757607871 podStartE2EDuration="29.968561974s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:12.437565843 +0000 UTC m=+735.953924916" lastFinishedPulling="2026-02-17 16:06:22.648519946 +0000 UTC m=+746.164879019" observedRunningTime="2026-02-17 16:06:23.962488659 +0000 UTC m=+747.478847752" watchObservedRunningTime="2026-02-17 16:06:23.968561974 +0000 UTC m=+747.484921067" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.993001 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" podStartSLOduration=21.761054609 podStartE2EDuration="29.992980808s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:14.395971817 +0000 UTC m=+737.912330890" lastFinishedPulling="2026-02-17 16:06:22.627898016 +0000 UTC m=+746.144257089" observedRunningTime="2026-02-17 16:06:23.989690279 +0000 UTC m=+747.506049362" watchObservedRunningTime="2026-02-17 16:06:23.992980808 +0000 UTC m=+747.509339891" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.733268 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"] Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.734521 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.739197 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.739249 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.743732 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2mptt"] Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.744537 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2mptt" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.745063 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4fddd" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.749719 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jrc9v" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.756161 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"] Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.761622 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2mptt"] Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.773623 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dgw65"] Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.774353 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.779395 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r4gtf" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.782101 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qc2\" (UniqueName: \"kubernetes.io/projected/5bcb3c4d-b451-49ff-87b7-7b95830c0628-kube-api-access-25qc2\") pod \"cert-manager-webhook-687f57d79b-dgw65\" (UID: \"5bcb3c4d-b451-49ff-87b7-7b95830c0628\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.782180 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ss2\" (UniqueName: \"kubernetes.io/projected/f70c72b0-4029-491f-b93e-4b4e52c5bf77-kube-api-access-r6ss2\") pod \"cert-manager-cainjector-cf98fcc89-cjbd9\" (UID: \"f70c72b0-4029-491f-b93e-4b4e52c5bf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.782256 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wh9\" (UniqueName: \"kubernetes.io/projected/e17861f0-9138-4fa1-8fa0-7bd761f1e1bd-kube-api-access-s8wh9\") pod \"cert-manager-858654f9db-2mptt\" (UID: \"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd\") " pod="cert-manager/cert-manager-858654f9db-2mptt" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.799117 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dgw65"] Feb 17 16:06:32 crc 
kubenswrapper[4808]: I0217 16:06:32.885110 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qc2\" (UniqueName: \"kubernetes.io/projected/5bcb3c4d-b451-49ff-87b7-7b95830c0628-kube-api-access-25qc2\") pod \"cert-manager-webhook-687f57d79b-dgw65\" (UID: \"5bcb3c4d-b451-49ff-87b7-7b95830c0628\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.885211 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ss2\" (UniqueName: \"kubernetes.io/projected/f70c72b0-4029-491f-b93e-4b4e52c5bf77-kube-api-access-r6ss2\") pod \"cert-manager-cainjector-cf98fcc89-cjbd9\" (UID: \"f70c72b0-4029-491f-b93e-4b4e52c5bf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.885286 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wh9\" (UniqueName: \"kubernetes.io/projected/e17861f0-9138-4fa1-8fa0-7bd761f1e1bd-kube-api-access-s8wh9\") pod \"cert-manager-858654f9db-2mptt\" (UID: \"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd\") " pod="cert-manager/cert-manager-858654f9db-2mptt" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.910636 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ss2\" (UniqueName: \"kubernetes.io/projected/f70c72b0-4029-491f-b93e-4b4e52c5bf77-kube-api-access-r6ss2\") pod \"cert-manager-cainjector-cf98fcc89-cjbd9\" (UID: \"f70c72b0-4029-491f-b93e-4b4e52c5bf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.911602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qc2\" (UniqueName: \"kubernetes.io/projected/5bcb3c4d-b451-49ff-87b7-7b95830c0628-kube-api-access-25qc2\") pod \"cert-manager-webhook-687f57d79b-dgw65\" (UID: \"5bcb3c4d-b451-49ff-87b7-7b95830c0628\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.925897 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wh9\" (UniqueName: \"kubernetes.io/projected/e17861f0-9138-4fa1-8fa0-7bd761f1e1bd-kube-api-access-s8wh9\") pod \"cert-manager-858654f9db-2mptt\" (UID: \"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd\") " pod="cert-manager/cert-manager-858654f9db-2mptt" Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.059425 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.068900 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2mptt" Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.090766 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.349961 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2mptt"] Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.391964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"] Feb 17 16:06:33 crc kubenswrapper[4808]: W0217 16:06:33.392326 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf70c72b0_4029_491f_b93e_4b4e52c5bf77.slice/crio-a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979 WatchSource:0}: Error finding container a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979: Status 404 returned error can't find the container with id a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979 Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.416525 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dgw65"] Feb 17 16:06:33 crc kubenswrapper[4808]: W0217 16:06:33.417475 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bcb3c4d_b451_49ff_87b7_7b95830c0628.slice/crio-27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a WatchSource:0}: Error finding container 27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a: Status 404 returned error can't find the container with id 27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.906665 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2mptt" event={"ID":"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd","Type":"ContainerStarted","Data":"d4731825c937cb528d1f743ecd654c596e6dc8dd3d59ccc73a12daad262f2d6e"} Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.909836 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" event={"ID":"5bcb3c4d-b451-49ff-87b7-7b95830c0628","Type":"ContainerStarted","Data":"27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a"} Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.914662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" event={"ID":"f70c72b0-4029-491f-b93e-4b4e52c5bf77","Type":"ContainerStarted","Data":"a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979"} Feb 17 16:06:35 crc kubenswrapper[4808]: I0217 16:06:35.342943 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.953303 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" event={"ID":"5bcb3c4d-b451-49ff-87b7-7b95830c0628","Type":"ContainerStarted","Data":"36a1cf2ddc7cf09feea6f0227066f9fdd5073a3e1abd24f39c2bfb6af9e0f434"} Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.953995 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.954584 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" 
event={"ID":"f70c72b0-4029-491f-b93e-4b4e52c5bf77","Type":"ContainerStarted","Data":"0c4db39151f8ef5adecf6fdab35766e0051d8c4a640dbfd1abdb8974fdcfa643"} Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.955815 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2mptt" event={"ID":"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd","Type":"ContainerStarted","Data":"68e5ae3c31d44d177a2b5748c59eb12216a5ecae434961cdc32253d2e28fd647"} Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.982644 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" podStartSLOduration=2.224439618 podStartE2EDuration="6.982622548s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:33.419587749 +0000 UTC m=+756.935946822" lastFinishedPulling="2026-02-17 16:06:38.177770669 +0000 UTC m=+761.694129752" observedRunningTime="2026-02-17 16:06:38.980728796 +0000 UTC m=+762.497087869" watchObservedRunningTime="2026-02-17 16:06:38.982622548 +0000 UTC m=+762.498981621" Feb 17 16:06:39 crc kubenswrapper[4808]: I0217 16:06:39.010494 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" podStartSLOduration=2.035282049 podStartE2EDuration="7.010466065s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:33.394679162 +0000 UTC m=+756.911038235" lastFinishedPulling="2026-02-17 16:06:38.369863178 +0000 UTC m=+761.886222251" observedRunningTime="2026-02-17 16:06:39.005253063 +0000 UTC m=+762.521612136" watchObservedRunningTime="2026-02-17 16:06:39.010466065 +0000 UTC m=+762.526825138" Feb 17 16:06:39 crc kubenswrapper[4808]: I0217 16:06:39.042440 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2mptt" podStartSLOduration=2.214478487 podStartE2EDuration="7.042416933s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:33.349652888 +0000 UTC m=+756.866011961" lastFinishedPulling="2026-02-17 16:06:38.177591334 +0000 UTC m=+761.693950407" observedRunningTime="2026-02-17 16:06:39.029670427 +0000 UTC m=+762.546029500" watchObservedRunningTime="2026-02-17 16:06:39.042416933 +0000 UTC m=+762.558776006" Feb 17 16:06:39 crc kubenswrapper[4808]: I0217 16:06:39.814705 4808 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 16:06:43 crc kubenswrapper[4808]: I0217 16:06:43.094779 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.378106 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hcq8m"] Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.379629 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.388255 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"] Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.474212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.474278 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.474453 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.576511 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.576606 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.576676 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.577051 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.577152 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.597041 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.696670 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:47 crc kubenswrapper[4808]: I0217 16:06:47.027084 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"] Feb 17 16:06:48 crc kubenswrapper[4808]: I0217 16:06:48.016293 4808 generic.go:334] "Generic (PLEG): container finished" podID="269e3307-558f-4451-bf67-eb8e9be6237f" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587" exitCode=0 Feb 17 16:06:48 crc kubenswrapper[4808]: I0217 16:06:48.016376 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587"} Feb 17 16:06:48 crc kubenswrapper[4808]: I0217 16:06:48.016425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerStarted","Data":"1fd2bf092e18b776d10b7a03c35f6845c7dfb7c5d54cda2b4dcb7c0f8b0de573"} Feb 17 16:06:52 crc kubenswrapper[4808]: I0217 16:06:52.042598 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerStarted","Data":"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"} Feb 17 16:06:53 crc kubenswrapper[4808]: I0217 16:06:53.052084 4808 generic.go:334] "Generic (PLEG): container finished" podID="269e3307-558f-4451-bf67-eb8e9be6237f" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3" exitCode=0 Feb 17 16:06:53 crc kubenswrapper[4808]: I0217 16:06:53.052156 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"} Feb 17 16:06:54 crc kubenswrapper[4808]: I0217 16:06:54.066962 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerStarted","Data":"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"} Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.697259 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.697609 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.739921 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.757413 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-hcq8m" podStartSLOduration=4.953725329 podStartE2EDuration="10.757393246s" podCreationTimestamp="2026-02-17 16:06:46 +0000 UTC" firstStartedPulling="2026-02-17 16:06:48.018893583 +0000 UTC m=+771.535252686" lastFinishedPulling="2026-02-17 16:06:53.8225615 +0000 UTC m=+777.338920603" observedRunningTime="2026-02-17 16:06:54.08864401 +0000 UTC m=+777.605003123" watchObservedRunningTime="2026-02-17 16:06:56.757393246 +0000 UTC m=+780.273752339" Feb 17 16:07:06 crc kubenswrapper[4808]: I0217 16:07:06.766489 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:07:06 crc kubenswrapper[4808]: I0217 16:07:06.840157 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"] Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.166883 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hcq8m" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server" containerID="cri-o://567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b" gracePeriod=2 Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.546401 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.585738 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"269e3307-558f-4451-bf67-eb8e9be6237f\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.585875 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"269e3307-558f-4451-bf67-eb8e9be6237f\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.585950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"269e3307-558f-4451-bf67-eb8e9be6237f\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.587414 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities" (OuterVolumeSpecName: "utilities") pod "269e3307-558f-4451-bf67-eb8e9be6237f" (UID: "269e3307-558f-4451-bf67-eb8e9be6237f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.597200 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n" (OuterVolumeSpecName: "kube-api-access-scd7n") pod "269e3307-558f-4451-bf67-eb8e9be6237f" (UID: "269e3307-558f-4451-bf67-eb8e9be6237f"). InnerVolumeSpecName "kube-api-access-scd7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.637931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "269e3307-558f-4451-bf67-eb8e9be6237f" (UID: "269e3307-558f-4451-bf67-eb8e9be6237f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678334 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"] Feb 17 16:07:07 crc kubenswrapper[4808]: E0217 16:07:07.678742 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678774 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server" Feb 17 16:07:07 crc kubenswrapper[4808]: E0217 16:07:07.678794 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-utilities" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678807 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-utilities" Feb 17 16:07:07 crc kubenswrapper[4808]: E0217 16:07:07.678835 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-content" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678847 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-content" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.679011 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.680336 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.682191 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.687114 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.687152 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.687166 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.692609 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"] Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.788983 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.789039 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.789104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.889927 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.889988 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: 
\"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.890043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.890631 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.890944 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.911318 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.004190 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177300 4808 generic.go:334] "Generic (PLEG): container finished" podID="269e3307-558f-4451-bf67-eb8e9be6237f" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b" exitCode=0 Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"} Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"1fd2bf092e18b776d10b7a03c35f6845c7dfb7c5d54cda2b4dcb7c0f8b0de573"} Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177440 4808 scope.go:117] "RemoveContainer" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177483 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.206889 4808 scope.go:117] "RemoveContainer" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.211009 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"] Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.213379 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"] Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.231522 4808 scope.go:117] "RemoveContainer" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.232741 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"] Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.249287 4808 scope.go:117] "RemoveContainer" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b" Feb 17 16:07:08 crc kubenswrapper[4808]: E0217 16:07:08.249730 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b\": container with ID starting with 567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b not found: ID does not exist" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.249789 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"} err="failed to get container status \"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b\": rpc error: code = NotFound desc = could not find container \"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b\": container with ID starting with 567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b not found: ID does not exist" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.249818 4808 scope.go:117] "RemoveContainer" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3" Feb 17 16:07:08 crc kubenswrapper[4808]: E0217 16:07:08.250230 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3\": container with ID starting with 6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3 not found: ID does not exist" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.250252 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"} err="failed to get container status \"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3\": rpc error: code = NotFound desc = could not find container \"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3\": container with ID starting with 6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3 not found: ID does not exist" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.250268 4808 scope.go:117] 
"RemoveContainer" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587" Feb 17 16:07:08 crc kubenswrapper[4808]: E0217 16:07:08.250607 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587\": container with ID starting with af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587 not found: ID does not exist" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587" Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.250623 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587"} err="failed to get container status \"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587\": rpc error: code = NotFound desc = could not find container \"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587\": container with ID starting with af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587 not found: ID does not exist" Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.158916 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" path="/var/lib/kubelet/pods/269e3307-558f-4451-bf67-eb8e9be6237f/volumes" Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.187282 4808 generic.go:334] "Generic (PLEG): container finished" podID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerID="82792d966d5393ffaa4332aea9a17514adac42b7cc94afea4847c0cb7c99de4f" exitCode=0 Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.187354 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"82792d966d5393ffaa4332aea9a17514adac42b7cc94afea4847c0cb7c99de4f"} Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.187396 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerStarted","Data":"5679c07336490e02ce9e6644859a6efa88ccfa9a9e2b80bfb7f81039ee25987b"} Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.217159 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.217879 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.220509 4808 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-26fhb" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.220991 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.226041 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.230148 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.323037 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdmn\" (UniqueName: \"kubernetes.io/projected/e722f9d4-4e9f-4cb6-bed6-59c141dffcb6-kube-api-access-zqdmn\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.323095 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.424179 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdmn\" (UniqueName: \"kubernetes.io/projected/e722f9d4-4e9f-4cb6-bed6-59c141dffcb6-kube-api-access-zqdmn\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.424261 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.430651 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.430726 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/951df6a420e49aca90444f5a4550c43a8d1257bfc8291118537598533d0c9023/globalmount\"" pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.466246 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdmn\" (UniqueName: \"kubernetes.io/projected/e722f9d4-4e9f-4cb6-bed6-59c141dffcb6-kube-api-access-zqdmn\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.472479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.542185 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.761713 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:07:10 crc kubenswrapper[4808]: W0217 16:07:10.765211 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode722f9d4_4e9f_4cb6_bed6_59c141dffcb6.slice/crio-bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e WatchSource:0}: Error finding container bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e: Status 404 returned error can't find the container with id bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.201405 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6","Type":"ContainerStarted","Data":"bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e"} Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.203719 4808 generic.go:334] "Generic (PLEG): container finished" podID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerID="cb0b48b5a25cf604e7682c779b1d79f2d82c02abe4339836e41cde853024f884" exitCode=0 Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.203754 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"cb0b48b5a25cf604e7682c779b1d79f2d82c02abe4339836e41cde853024f884"} Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.217467 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.218523 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.238842 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.244560 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.245040 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.245141 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.345777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.345830 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.345859 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.346354 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.346413 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.374102 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.549084 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:12 crc kubenswrapper[4808]: I0217 16:07:12.006005 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:12 crc kubenswrapper[4808]: I0217 16:07:12.213647 4808 generic.go:334] "Generic (PLEG): container finished" podID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerID="c5e8563e9f798c18d0db5fdb9fe721f12311862ab9c9c89d722f9e1221976b26" exitCode=0 Feb 17 16:07:12 crc kubenswrapper[4808]: I0217 16:07:12.213688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"c5e8563e9f798c18d0db5fdb9fe721f12311862ab9c9c89d722f9e1221976b26"} Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.220253 4808 generic.go:334] "Generic (PLEG): container finished" podID="a78a92b2-62a6-4695-8363-7585b9131e18" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" exitCode=0 Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.220434 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96"} Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.221528 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerStarted","Data":"a419479e130f9555fcdc8967da0620259792ea15250ef1b70571c1f01800c407"} Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.900740 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.085493 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"da4f14dc-179d-4178-9a9c-747ab825f3e4\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.085608 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"da4f14dc-179d-4178-9a9c-747ab825f3e4\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.085658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"da4f14dc-179d-4178-9a9c-747ab825f3e4\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.086800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle" (OuterVolumeSpecName: "bundle") pod "da4f14dc-179d-4178-9a9c-747ab825f3e4" (UID: "da4f14dc-179d-4178-9a9c-747ab825f3e4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.097400 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util" (OuterVolumeSpecName: "util") pod "da4f14dc-179d-4178-9a9c-747ab825f3e4" (UID: "da4f14dc-179d-4178-9a9c-747ab825f3e4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.098793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d" (OuterVolumeSpecName: "kube-api-access-h6j6d") pod "da4f14dc-179d-4178-9a9c-747ab825f3e4" (UID: "da4f14dc-179d-4178-9a9c-747ab825f3e4"). InnerVolumeSpecName "kube-api-access-h6j6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.186980 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.187079 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.187094 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.229493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"5679c07336490e02ce9e6644859a6efa88ccfa9a9e2b80bfb7f81039ee25987b"} Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.229530 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5679c07336490e02ce9e6644859a6efa88ccfa9a9e2b80bfb7f81039ee25987b" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.229646 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:15 crc kubenswrapper[4808]: I0217 16:07:15.241098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerStarted","Data":"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45"} Feb 17 16:07:15 crc kubenswrapper[4808]: I0217 16:07:15.245673 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6","Type":"ContainerStarted","Data":"62a147fb05d1f8af7a89e11b6938afed2a7e8fab9079bc0d38119bc3c0149235"} Feb 17 16:07:15 crc kubenswrapper[4808]: I0217 16:07:15.287358 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.568218926 podStartE2EDuration="8.287333993s" podCreationTimestamp="2026-02-17 16:07:07 +0000 UTC" firstStartedPulling="2026-02-17 16:07:10.767757337 +0000 UTC m=+794.284116410" lastFinishedPulling="2026-02-17 16:07:14.486872394 +0000 UTC m=+798.003231477" observedRunningTime="2026-02-17 16:07:15.283731876 +0000 UTC m=+798.800090999" watchObservedRunningTime="2026-02-17 16:07:15.287333993 +0000 UTC m=+798.803693076" Feb 17 16:07:16 crc kubenswrapper[4808]: I0217 16:07:16.256469 4808 generic.go:334] "Generic (PLEG): container finished" podID="a78a92b2-62a6-4695-8363-7585b9131e18" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" exitCode=0 Feb 17 16:07:16 crc kubenswrapper[4808]: I0217 16:07:16.256592 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45"} Feb 17 16:07:17 crc kubenswrapper[4808]: I0217 16:07:17.266158 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerStarted","Data":"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f"} Feb 17 16:07:17 crc kubenswrapper[4808]: I0217 16:07:17.340015 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqxh7" podStartSLOduration=3.464451665 podStartE2EDuration="6.339997899s" podCreationTimestamp="2026-02-17 16:07:11 +0000 UTC" firstStartedPulling="2026-02-17 16:07:13.835050002 +0000 UTC m=+797.351409075" lastFinishedPulling="2026-02-17 16:07:16.710596226 +0000 UTC m=+800.226955309" observedRunningTime="2026-02-17 16:07:17.333693508 +0000 UTC m=+800.850052581" watchObservedRunningTime="2026-02-17 16:07:17.339997899 +0000 UTC m=+800.856356972" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739008 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj"] Feb 17 16:07:19 crc kubenswrapper[4808]: E0217 16:07:19.739552 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="util" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739564 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="util" Feb 17 16:07:19 crc kubenswrapper[4808]: E0217 16:07:19.739599 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="pull" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739605 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="pull" Feb 17 16:07:19 crc kubenswrapper[4808]: E0217 16:07:19.739614 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="extract" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739621 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="extract" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739737 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="extract" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.740380 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.743791 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.743882 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.743925 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.745923 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.746796 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.757588 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-rnm4v" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.780324 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj"] Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797477 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-apiservice-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797554 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-webhook-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797783 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvg5\" (UniqueName: \"kubernetes.io/projected/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-kube-api-access-tqvg5\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" 
(UniqueName: \"kubernetes.io/configmap/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-manager-config\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-apiservice-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898902 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-webhook-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqvg5\" (UniqueName: \"kubernetes.io/projected/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-kube-api-access-tqvg5\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898994 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-manager-config\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.899900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-manager-config\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.904737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-webhook-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.904735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.906192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-apiservice-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.917804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqvg5\" (UniqueName: \"kubernetes.io/projected/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-kube-api-access-tqvg5\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:20 crc kubenswrapper[4808]: I0217 16:07:20.060796 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:20 crc kubenswrapper[4808]: I0217 16:07:20.280765 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj"] Feb 17 16:07:20 crc kubenswrapper[4808]: I0217 16:07:20.301070 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" event={"ID":"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec","Type":"ContainerStarted","Data":"cfec0a27f8d32b6591fb291282a859313a8962cc233323b16ed723d7ade2cac8"} Feb 17 16:07:21 crc kubenswrapper[4808]: I0217 16:07:21.549355 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:21 crc kubenswrapper[4808]: I0217 16:07:21.549744 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:22 crc kubenswrapper[4808]: I0217 16:07:22.602776 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqxh7" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" probeResult="failure" output=< Feb 17 16:07:22 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:07:22 crc kubenswrapper[4808]: > Feb 17 16:07:26 crc kubenswrapper[4808]: I0217 16:07:26.351335 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" event={"ID":"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec","Type":"ContainerStarted","Data":"97ff72783f138b3f69f602096a300d9bbdb9f63954ae9d4d801b9b136080fbbc"} Feb 17 16:07:31 crc kubenswrapper[4808]: I0217 16:07:31.612517 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:31 crc kubenswrapper[4808]: I0217 16:07:31.670367 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:31 crc 
kubenswrapper[4808]: I0217 16:07:31.847116 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.387036 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" event={"ID":"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec","Type":"ContainerStarted","Data":"2bab7c8842c1ae4881cf254bb42d2d92593fdc5607b5097adfe47cdd1de7b485"} Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.387812 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.391136 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.418466 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" podStartSLOduration=1.550923957 podStartE2EDuration="13.418424242s" podCreationTimestamp="2026-02-17 16:07:19 +0000 UTC" firstStartedPulling="2026-02-17 16:07:20.293892423 +0000 UTC m=+803.810251496" lastFinishedPulling="2026-02-17 16:07:32.161392708 +0000 UTC m=+815.677751781" observedRunningTime="2026-02-17 16:07:32.414235098 +0000 UTC m=+815.930594181" watchObservedRunningTime="2026-02-17 16:07:32.418424242 +0000 UTC m=+815.934783335" Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.395000 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqxh7" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" containerID="cri-o://bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" gracePeriod=2 Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.822774 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.993229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"a78a92b2-62a6-4695-8363-7585b9131e18\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.993304 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"a78a92b2-62a6-4695-8363-7585b9131e18\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.993374 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"a78a92b2-62a6-4695-8363-7585b9131e18\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.994211 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities" (OuterVolumeSpecName: "utilities") pod "a78a92b2-62a6-4695-8363-7585b9131e18" (UID: "a78a92b2-62a6-4695-8363-7585b9131e18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.999526 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp" (OuterVolumeSpecName: "kube-api-access-s4jgp") pod "a78a92b2-62a6-4695-8363-7585b9131e18" (UID: "a78a92b2-62a6-4695-8363-7585b9131e18"). InnerVolumeSpecName "kube-api-access-s4jgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.095177 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.095226 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.114541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a78a92b2-62a6-4695-8363-7585b9131e18" (UID: "a78a92b2-62a6-4695-8363-7585b9131e18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.196715 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402293 4808 generic.go:334] "Generic (PLEG): container finished" podID="a78a92b2-62a6-4695-8363-7585b9131e18" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" exitCode=0 Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f"} Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402391 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402429 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"a419479e130f9555fcdc8967da0620259792ea15250ef1b70571c1f01800c407"} Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402449 4808 scope.go:117] "RemoveContainer" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.421368 4808 scope.go:117] "RemoveContainer" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.451252 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.456727 4808 scope.go:117] "RemoveContainer" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.458000 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.470754 4808 scope.go:117] "RemoveContainer" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" Feb 17 16:07:34 crc kubenswrapper[4808]: E0217 16:07:34.473697 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f\": container with ID starting with bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f not found: ID does not exist" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.473740 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f"} err="failed to get container status \"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f\": rpc error: code = NotFound desc = could not find container \"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f\": container with ID starting with bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f not found: ID does not exist" Feb 17 16:07:34 crc 
kubenswrapper[4808]: I0217 16:07:34.473766 4808 scope.go:117] "RemoveContainer" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" Feb 17 16:07:34 crc kubenswrapper[4808]: E0217 16:07:34.474109 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45\": container with ID starting with fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45 not found: ID does not exist" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.474165 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45"} err="failed to get container status \"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45\": rpc error: code = NotFound desc = could not find container \"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45\": container with ID starting with fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45 not found: ID does not exist" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.474207 4808 scope.go:117] "RemoveContainer" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" Feb 17 16:07:34 crc kubenswrapper[4808]: E0217 16:07:34.474466 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96\": container with ID starting with 20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96 not found: ID does not exist" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.474486 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96"} err="failed to get container status \"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96\": rpc error: code = NotFound desc = could not find container \"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96\": container with ID starting with 20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96 not found: ID does not exist" Feb 17 16:07:35 crc kubenswrapper[4808]: I0217 16:07:35.157797 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" path="/var/lib/kubelet/pods/a78a92b2-62a6-4695-8363-7585b9131e18/volumes" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616093 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl"] Feb 17 16:08:07 crc kubenswrapper[4808]: E0217 16:08:07.616842 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-utilities" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616853 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-utilities" Feb 17 16:08:07 crc kubenswrapper[4808]: E0217 16:08:07.616869 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" Feb 17 16:08:07 crc 
kubenswrapper[4808]: I0217 16:08:07.616875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" Feb 17 16:08:07 crc kubenswrapper[4808]: E0217 16:08:07.616886 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-content" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616892 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-content" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616993 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.617808 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.620112 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.624426 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl"] Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.930609 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.930701 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.930793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.031796 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.031870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.031909 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.032351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.032648 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.050753 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.249024 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.447445 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl"] Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.944317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerStarted","Data":"74b894f184bb83b076cc8f257ea609aa7d7356620da3cd381d1989e96fd746cf"} Feb 17 16:08:09 crc kubenswrapper[4808]: I0217 16:08:09.953317 4808 generic.go:334] "Generic (PLEG): container finished" podID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerID="fdf9f991333fcecafde3c4ecc81e0edee4d4616057eb1fff2bed6420d00eea2b" exitCode=0 Feb 17 16:08:09 crc kubenswrapper[4808]: I0217 16:08:09.953393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"fdf9f991333fcecafde3c4ecc81e0edee4d4616057eb1fff2bed6420d00eea2b"} Feb 17 16:08:11 crc kubenswrapper[4808]: I0217 16:08:11.969022 4808 generic.go:334] "Generic (PLEG): container finished" podID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerID="927cd9ed4f81fcf9ea82385b3d924f88e461102f00f2185156a7e4092db64b6a" exitCode=0 Feb 17 16:08:11 crc kubenswrapper[4808]: I0217 16:08:11.969207 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"927cd9ed4f81fcf9ea82385b3d924f88e461102f00f2185156a7e4092db64b6a"} Feb 17 16:08:12 crc kubenswrapper[4808]: I0217 16:08:12.986617 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"5b696282029bbdf5471ceac57569133dc5868022bf6e116af1b0d637d41ff5d7"} Feb 17 16:08:12 crc kubenswrapper[4808]: I0217 16:08:12.986559 4808 generic.go:334] "Generic (PLEG): container finished" podID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerID="5b696282029bbdf5471ceac57569133dc5868022bf6e116af1b0d637d41ff5d7" exitCode=0 Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.335812 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.477977 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.478171 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.478313 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.478808 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle" (OuterVolumeSpecName: "bundle") pod "5903df73-c7d6-46cf-8aa2-4f0067c08b99" (UID: "5903df73-c7d6-46cf-8aa2-4f0067c08b99"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.484811 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j" (OuterVolumeSpecName: "kube-api-access-gmg5j") pod "5903df73-c7d6-46cf-8aa2-4f0067c08b99" (UID: "5903df73-c7d6-46cf-8aa2-4f0067c08b99"). InnerVolumeSpecName "kube-api-access-gmg5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.580134 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.580180 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.746164 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util" (OuterVolumeSpecName: "util") pod "5903df73-c7d6-46cf-8aa2-4f0067c08b99" (UID: "5903df73-c7d6-46cf-8aa2-4f0067c08b99"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.782780 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:15 crc kubenswrapper[4808]: I0217 16:08:15.003024 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"74b894f184bb83b076cc8f257ea609aa7d7356620da3cd381d1989e96fd746cf"} Feb 17 16:08:15 crc kubenswrapper[4808]: I0217 16:08:15.003080 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74b894f184bb83b076cc8f257ea609aa7d7356620da3cd381d1989e96fd746cf" Feb 17 16:08:15 crc kubenswrapper[4808]: I0217 16:08:15.003129 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714284 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"] Feb 17 16:08:16 crc kubenswrapper[4808]: E0217 16:08:16.714540 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="util" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714555 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="util" Feb 17 16:08:16 crc kubenswrapper[4808]: E0217 16:08:16.714566 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="extract" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714599 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="extract" Feb 17 16:08:16 crc kubenswrapper[4808]: E0217 16:08:16.714615 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="pull" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714627 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="pull" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714768 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="extract" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.715236 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.717700 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.718350 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.718350 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9sqg6" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.741562 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"] Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.810495 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8kr\" (UniqueName: \"kubernetes.io/projected/691d742f-d55e-48e4-89bc-7936f6b31f12-kube-api-access-qw8kr\") pod \"nmstate-operator-694c9596b7-bjzdq\" (UID: \"691d742f-d55e-48e4-89bc-7936f6b31f12\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.911531 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8kr\" (UniqueName: \"kubernetes.io/projected/691d742f-d55e-48e4-89bc-7936f6b31f12-kube-api-access-qw8kr\") pod \"nmstate-operator-694c9596b7-bjzdq\" (UID: \"691d742f-d55e-48e4-89bc-7936f6b31f12\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.942204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8kr\" (UniqueName: \"kubernetes.io/projected/691d742f-d55e-48e4-89bc-7936f6b31f12-kube-api-access-qw8kr\") pod \"nmstate-operator-694c9596b7-bjzdq\" (UID: \"691d742f-d55e-48e4-89bc-7936f6b31f12\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" Feb 17 16:08:17 crc kubenswrapper[4808]: I0217 16:08:17.103322 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" Feb 17 16:08:17 crc kubenswrapper[4808]: I0217 16:08:17.423147 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"] Feb 17 16:08:18 crc kubenswrapper[4808]: I0217 16:08:18.023786 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" event={"ID":"691d742f-d55e-48e4-89bc-7936f6b31f12","Type":"ContainerStarted","Data":"369b6c728989f36b73866d643238042a7890f00ca5c64336d2a8b9e3b8265cee"} Feb 17 16:08:20 crc kubenswrapper[4808]: I0217 16:08:20.035499 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" event={"ID":"691d742f-d55e-48e4-89bc-7936f6b31f12","Type":"ContainerStarted","Data":"6823c5483e3a0f31a02ad66732891203651922e948c6e6d64989a130cad26b65"} Feb 17 16:08:20 crc kubenswrapper[4808]: I0217 16:08:20.056969 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" podStartSLOduration=1.9167388650000001 podStartE2EDuration="4.05694338s" podCreationTimestamp="2026-02-17 16:08:16 +0000 UTC" firstStartedPulling="2026-02-17 16:08:17.422768778 +0000 UTC m=+860.939127861" lastFinishedPulling="2026-02-17 16:08:19.562973273 +0000 UTC m=+863.079332376" observedRunningTime="2026-02-17 16:08:20.049070378 +0000 UTC m=+863.565429461" watchObservedRunningTime="2026-02-17 16:08:20.05694338 +0000 UTC m=+863.573302493" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.042946 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.052342 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.054815 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nsdgw" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.067220 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.068525 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.079055 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.085821 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.098927 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.126657 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q5xs9"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.127821 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.184182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f2e1846-9112-48fb-b69e-0a12393c62e6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.184225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhnk\" (UniqueName: \"kubernetes.io/projected/56fb3ff0-71b6-4792-acdf-33edb0cb23b4-kube-api-access-nfhnk\") pod \"nmstate-metrics-58c85c668d-j8rw5\" (UID: \"56fb3ff0-71b6-4792-acdf-33edb0cb23b4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.184248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fxfp\" (UniqueName: \"kubernetes.io/projected/9f2e1846-9112-48fb-b69e-0a12393c62e6-kube-api-access-6fxfp\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.218074 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.218920 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.225033 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.225273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.226283 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h64qz" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.246131 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285729 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-nmstate-lock\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-ovs-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285901 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-dbus-socket\") pod \"nmstate-handler-q5xs9\" (UID: 
\"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285939 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f2e1846-9112-48fb-b69e-0a12393c62e6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fxfp\" (UniqueName: \"kubernetes.io/projected/9f2e1846-9112-48fb-b69e-0a12393c62e6-kube-api-access-6fxfp\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285997 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfhnk\" (UniqueName: \"kubernetes.io/projected/56fb3ff0-71b6-4792-acdf-33edb0cb23b4-kube-api-access-nfhnk\") pod \"nmstate-metrics-58c85c668d-j8rw5\" (UID: \"56fb3ff0-71b6-4792-acdf-33edb0cb23b4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.286067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cddt\" (UniqueName: \"kubernetes.io/projected/16498191-a001-4403-af35-b76104720e91-kube-api-access-9cddt\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.316089 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f2e1846-9112-48fb-b69e-0a12393c62e6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.326188 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fxfp\" (UniqueName: \"kubernetes.io/projected/9f2e1846-9112-48fb-b69e-0a12393c62e6-kube-api-access-6fxfp\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.329456 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfhnk\" (UniqueName: \"kubernetes.io/projected/56fb3ff0-71b6-4792-acdf-33edb0cb23b4-kube-api-access-nfhnk\") pod \"nmstate-metrics-58c85c668d-j8rw5\" (UID: \"56fb3ff0-71b6-4792-acdf-33edb0cb23b4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.387514 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cddt\" (UniqueName: \"kubernetes.io/projected/16498191-a001-4403-af35-b76104720e91-kube-api-access-9cddt\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388077 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-nmstate-lock\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388116 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7bs\" (UniqueName: \"kubernetes.io/projected/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-kube-api-access-bf7bs\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388158 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-ovs-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388185 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-dbus-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388833 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-nmstate-lock\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388888 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-ovs-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.389189 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-dbus-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.392480 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.402085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.405557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cddt\" (UniqueName: \"kubernetes.io/projected/16498191-a001-4403-af35-b76104720e91-kube-api-access-9cddt\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.453045 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.470407 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79ddccbf49-dhwd5"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.471419 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.480838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79ddccbf49-dhwd5"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.495503 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf7bs\" (UniqueName: \"kubernetes.io/projected/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-kube-api-access-bf7bs\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.495608 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.495716 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: E0217 16:08:21.496014 4808 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 17 16:08:21 crc kubenswrapper[4808]: E0217 16:08:21.496095 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert podName:2c731526-11bd-4ef9-bb62-eb3a0512ff1d nodeName:}" failed. No retries permitted until 2026-02-17 16:08:21.996069174 +0000 UTC m=+865.512428247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-48n66" (UID: "2c731526-11bd-4ef9-bb62-eb3a0512ff1d") : secret "plugin-serving-cert" not found Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.497540 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.529747 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf7bs\" (UniqueName: \"kubernetes.io/projected/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-kube-api-access-bf7bs\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:21 crc kubenswrapper[4808]: W0217 16:08:21.540884 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16498191_a001_4403_af35_b76104720e91.slice/crio-f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab WatchSource:0}: Error finding container f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab: Status 404 returned error can't find the container with id f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.597693 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-trusted-ca-bundle\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598458 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-oauth-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598620 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-oauth-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598698 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598774 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbfpw\" (UniqueName: 
\"kubernetes.io/projected/c546f1bc-ad95-41f2-988e-23868a5ab5dd-kube-api-access-cbfpw\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598853 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-service-ca\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.667854 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700272 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-oauth-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700348 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-oauth-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbfpw\" (UniqueName: \"kubernetes.io/projected/c546f1bc-ad95-41f2-988e-23868a5ab5dd-kube-api-access-cbfpw\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700493 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700530 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-service-ca\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " 
pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700630 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-trusted-ca-bundle\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.702036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-trusted-ca-bundle\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.702185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-oauth-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.703213 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-service-ca\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.703225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.707215 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-oauth-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.710203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.717754 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbfpw\" (UniqueName: \"kubernetes.io/projected/c546f1bc-ad95-41f2-988e-23868a5ab5dd-kube-api-access-cbfpw\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.742015 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"] Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.801714 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.003950 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.008461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.033855 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79ddccbf49-dhwd5"] Feb 17 16:08:22 crc kubenswrapper[4808]: W0217 16:08:22.039494 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc546f1bc_ad95_41f2_988e_23868a5ab5dd.slice/crio-6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3 WatchSource:0}: Error finding container 6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3: Status 404 returned error can't find the container with id 6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3 Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.056625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79ddccbf49-dhwd5" event={"ID":"c546f1bc-ad95-41f2-988e-23868a5ab5dd","Type":"ContainerStarted","Data":"6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3"} Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.058945 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" event={"ID":"9f2e1846-9112-48fb-b69e-0a12393c62e6","Type":"ContainerStarted","Data":"b2bf587e03f35613767dd4dab19285199930b9d831e0acb900993dc1090d0405"} Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.059822 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q5xs9" event={"ID":"16498191-a001-4403-af35-b76104720e91","Type":"ContainerStarted","Data":"f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab"} Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.060811 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" event={"ID":"56fb3ff0-71b6-4792-acdf-33edb0cb23b4","Type":"ContainerStarted","Data":"06a2f8058ee07cc6a58e11dd4d8e8b4d02e37fcc4b43fb38751ea191f8767c36"} Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.136092 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.605259 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"] Feb 17 16:08:22 crc kubenswrapper[4808]: W0217 16:08:22.618371 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c731526_11bd_4ef9_bb62_eb3a0512ff1d.slice/crio-c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114 WatchSource:0}: Error finding container c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114: Status 404 returned error can't find the container with id c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114 Feb 17 16:08:23 crc kubenswrapper[4808]: I0217 16:08:23.072920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" event={"ID":"2c731526-11bd-4ef9-bb62-eb3a0512ff1d","Type":"ContainerStarted","Data":"c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114"} Feb 17 16:08:23 crc kubenswrapper[4808]: I0217 16:08:23.074488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79ddccbf49-dhwd5" event={"ID":"c546f1bc-ad95-41f2-988e-23868a5ab5dd","Type":"ContainerStarted","Data":"c7305c9670bf9677188ae30700d505027d60190bcc2199580b39ceb50994b7ba"} Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.088485 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" event={"ID":"56fb3ff0-71b6-4792-acdf-33edb0cb23b4","Type":"ContainerStarted","Data":"f0562f74d693ba6b0b6a602bb3975ed95eb9636ba5661ed4317dc335ad58c81a"} Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.090086 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" event={"ID":"9f2e1846-9112-48fb-b69e-0a12393c62e6","Type":"ContainerStarted","Data":"5586b0d7a9493a03c5037cb79dac2bce1b44f9432738c8b202515093df790730"} Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.090239 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.092118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q5xs9" event={"ID":"16498191-a001-4403-af35-b76104720e91","Type":"ContainerStarted","Data":"a70b40991af1b76c8bcb0c03b7cd5e5719ac8cc120015d60128daf23f5eebc12"} Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.092251 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.107472 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" podStartSLOduration=1.5875267389999999 podStartE2EDuration="4.107423952s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:21.738944167 +0000 UTC m=+865.255303240" lastFinishedPulling="2026-02-17 16:08:24.25884138 +0000 UTC m=+867.775200453" observedRunningTime="2026-02-17 16:08:25.101825032 +0000 UTC m=+868.618184105" watchObservedRunningTime="2026-02-17 16:08:25.107423952 +0000 UTC m=+868.623783035" Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.112891 4808 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-console/console-79ddccbf49-dhwd5" podStartSLOduration=4.112872259 podStartE2EDuration="4.112872259s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:23.092546144 +0000 UTC m=+866.608905217" watchObservedRunningTime="2026-02-17 16:08:25.112872259 +0000 UTC m=+868.629231352" Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.125604 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q5xs9" podStartSLOduration=1.462694416 podStartE2EDuration="4.125558269s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:21.553382723 +0000 UTC m=+865.069741796" lastFinishedPulling="2026-02-17 16:08:24.216246576 +0000 UTC m=+867.732605649" observedRunningTime="2026-02-17 16:08:25.11851814 +0000 UTC m=+868.634877223" watchObservedRunningTime="2026-02-17 16:08:25.125558269 +0000 UTC m=+868.641917382" Feb 17 16:08:27 crc kubenswrapper[4808]: I0217 16:08:27.116232 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" event={"ID":"2c731526-11bd-4ef9-bb62-eb3a0512ff1d","Type":"ContainerStarted","Data":"c385c0ad3263aa669d2ee036cf635c1fe4f9a5f1d34e898afa387b928eb4d0f7"} Feb 17 16:08:27 crc kubenswrapper[4808]: I0217 16:08:27.135709 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" podStartSLOduration=2.802678918 podStartE2EDuration="6.13568975s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:22.623392053 +0000 UTC m=+866.139751116" lastFinishedPulling="2026-02-17 16:08:25.956402875 +0000 UTC m=+869.472761948" observedRunningTime="2026-02-17 16:08:27.131552358 +0000 UTC m=+870.647911521" watchObservedRunningTime="2026-02-17 16:08:27.13568975 +0000 UTC m=+870.652048853" Feb 17 16:08:28 crc kubenswrapper[4808]: I0217 16:08:28.128894 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" event={"ID":"56fb3ff0-71b6-4792-acdf-33edb0cb23b4","Type":"ContainerStarted","Data":"26ea35ed20769ac33f935f55683b9f8b7d7629a05eccaa4080c9185da2abd222"} Feb 17 16:08:28 crc kubenswrapper[4808]: I0217 16:08:28.151324 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" podStartSLOduration=1.621908873 podStartE2EDuration="7.151296418s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:21.677142777 +0000 UTC m=+865.193501840" lastFinishedPulling="2026-02-17 16:08:27.206530312 +0000 UTC m=+870.722889385" observedRunningTime="2026-02-17 16:08:28.150665511 +0000 UTC m=+871.667024614" watchObservedRunningTime="2026-02-17 16:08:28.151296418 +0000 UTC m=+871.667655521" Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.502367 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q5xs9" Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.801850 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.803009 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.811313 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:32 crc kubenswrapper[4808]: I0217 16:08:32.165706 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79ddccbf49-dhwd5" Feb 17 16:08:32 crc kubenswrapper[4808]: I0217 16:08:32.252711 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 16:08:41 crc kubenswrapper[4808]: I0217 16:08:41.411428 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" Feb 17 16:08:51 crc kubenswrapper[4808]: I0217 16:08:51.592525 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:08:51 crc kubenswrapper[4808]: I0217 16:08:51.593822 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.141085 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw"] Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.143433 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.151629 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.164969 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw"] Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.243192 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.243400 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.243649 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.324080 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hdg74" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" containerID="cri-o://5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" gracePeriod=15 Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345535 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345953 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.346376 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.363833 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.484769 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.678344 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hdg74_e489a46b-9123-44c6-94e0-692621760dd6/console/0.log" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.678677 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762721 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762774 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762840 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762894 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762976 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763028 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763776 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763811 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca" (OuterVolumeSpecName: "service-ca") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config" (OuterVolumeSpecName: "console-config") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763919 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.769255 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.769367 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.769374 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm" (OuterVolumeSpecName: "kube-api-access-6lnfm") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "kube-api-access-6lnfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864451 4808 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864499 4808 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864515 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864526 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864539 4808 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864601 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864614 4808 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.960807 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw"] Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.388960 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hdg74_e489a46b-9123-44c6-94e0-692621760dd6/console/0.log" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389281 4808 generic.go:334] "Generic (PLEG): container finished" podID="e489a46b-9123-44c6-94e0-692621760dd6" containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" exitCode=2 Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389354 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389373 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerDied","Data":"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389406 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerDied","Data":"0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389428 4808 scope.go:117] "RemoveContainer" containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.392911 4808 generic.go:334] "Generic (PLEG): container finished" podID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerID="358da34cb13e59b5b2eea0ee50c08c53ee1042a95c8b7f0a5110b7c72d5bc6f1" exitCode=0 Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.392944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"358da34cb13e59b5b2eea0ee50c08c53ee1042a95c8b7f0a5110b7c72d5bc6f1"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.392967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerStarted","Data":"c907e069585e21057ee27ea3d446789d6b432b4c4f506cfa3b13885254560849"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.406297 4808 scope.go:117] "RemoveContainer" containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" Feb 17 16:08:58 crc kubenswrapper[4808]: E0217 16:08:58.406839 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a\": container with ID starting with 5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a not found: ID does not exist" containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.406886 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a"} err="failed to get container status \"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a\": rpc error: code = NotFound desc = could not find container \"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a\": container with ID starting with 5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a not found: ID does not exist" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.449110 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.453902 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 16:08:59 crc kubenswrapper[4808]: I0217 16:08:59.157287 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="e489a46b-9123-44c6-94e0-692621760dd6" path="/var/lib/kubelet/pods/e489a46b-9123-44c6-94e0-692621760dd6/volumes" Feb 17 16:09:00 crc kubenswrapper[4808]: I0217 16:09:00.412424 4808 generic.go:334] "Generic (PLEG): container finished" podID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerID="4aee38a5e736b972166cae78dfa52d80c5ca2e3b48fff8ca6436228cc635549b" exitCode=0 Feb 17 16:09:00 crc kubenswrapper[4808]: I0217 16:09:00.412478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"4aee38a5e736b972166cae78dfa52d80c5ca2e3b48fff8ca6436228cc635549b"} Feb 17 16:09:01 crc kubenswrapper[4808]: I0217 16:09:01.420814 4808 generic.go:334] "Generic (PLEG): container finished" podID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerID="151b383ff6053e07c90cf0ea55e4844dc94db57808fcbc6f44f253fc98c01395" exitCode=0 Feb 17 16:09:01 crc kubenswrapper[4808]: I0217 16:09:01.420954 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"151b383ff6053e07c90cf0ea55e4844dc94db57808fcbc6f44f253fc98c01395"} Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.662480 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.728518 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.728734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.728839 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.729846 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle" (OuterVolumeSpecName: "bundle") pod "df1cf40f-e7a2-40b1-8adb-45d2b5205584" (UID: "df1cf40f-e7a2-40b1-8adb-45d2b5205584"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.737339 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b" (OuterVolumeSpecName: "kube-api-access-8vz4b") pod "df1cf40f-e7a2-40b1-8adb-45d2b5205584" (UID: "df1cf40f-e7a2-40b1-8adb-45d2b5205584"). InnerVolumeSpecName "kube-api-access-8vz4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.742477 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util" (OuterVolumeSpecName: "util") pod "df1cf40f-e7a2-40b1-8adb-45d2b5205584" (UID: "df1cf40f-e7a2-40b1-8adb-45d2b5205584"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.830521 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.830570 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.830598 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:03 crc kubenswrapper[4808]: I0217 16:09:03.432951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"c907e069585e21057ee27ea3d446789d6b432b4c4f506cfa3b13885254560849"} Feb 17 16:09:03 crc kubenswrapper[4808]: I0217 16:09:03.433006 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c907e069585e21057ee27ea3d446789d6b432b4c4f506cfa3b13885254560849" Feb 17 16:09:03 crc kubenswrapper[4808]: I0217 16:09:03.433093 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.211647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6655d59788-74j79"] Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212447 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="pull" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212464 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="pull" Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212484 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="extract" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212492 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="extract" Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212501 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212508 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212525 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="util" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212534 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="util" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212675 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="extract" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212693 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console"
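Before admitting the new metallb-operator pod, the CPU and memory managers sweep their checkpointed assignments and drop entries belonging to pods that no longer exist: the marketplace bundle pod's pull/extract/util containers and the console container torn down above. A compressed sketch of that sweep, using hypothetical types (the real managers checkpoint much richer state than a set):

```go
package resourcemanagers

// assignments maps podUID -> containerName -> a checkpointed entry
// (illustrative; stands in for CPUSet / memory-block state).
type assignments map[string]map[string]struct{}

// removeStaleState drops every checkpointed entry whose pod is no longer
// active, mirroring the "RemoveStaleState: removing container" /
// "Deleted CPUSet assignment" pairs in the records above.
func removeStaleState(state assignments, activePods map[string]bool) {
	for podUID, containers := range state {
		if activePods[podUID] {
			continue // pod still exists; keep its assignments
		}
		for containerName := range containers {
			delete(containers, containerName) // one log pair per container
		}
		delete(state, podUID)
	}
}
```

Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.213200 4808 util.go:30] "No sandbox for pod can be found.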
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.215493 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.215542 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.215700 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-jfr66" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.218761 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.218847 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.223640 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6655d59788-74j79"] Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.369645 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrcp\" (UniqueName: \"kubernetes.io/projected/d90f3d87-35f4-4c7d-b157-424ee7b502cd-kube-api-access-fxrcp\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.369785 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-webhook-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.369863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-apiservice-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.471521 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-webhook-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.471652 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-apiservice-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.471705 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrcp\" (UniqueName: \"kubernetes.io/projected/d90f3d87-35f4-4c7d-b157-424ee7b502cd-kube-api-access-fxrcp\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.483440 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-apiservice-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.490365 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-webhook-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.495339 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrcp\" (UniqueName: \"kubernetes.io/projected/d90f3d87-35f4-4c7d-b157-424ee7b502cd-kube-api-access-fxrcp\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.527847 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.533965 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5"]
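Each volume above walks the same three steps: operationExecutor.VerifyControllerAttachedVolume confirms the volume is attached, operationExecutor.MountVolume starts the mount, and MountVolume.SetUp succeeded moves it into the actual state of world. This is the kubelet's volume reconciler diffing desired state against actual state; a schematic of the mount half, with illustrative names rather than the kubelet's real types:

```go
package volumereconciler

// volumeKey identifies one volume of one pod, roughly what the
// UniqueName/pod UID pair does in the records above (illustrative).
type volumeKey struct{ podUID, uniqueName string }

// reconcileMounts diffs the volumes the pod specs require (desired state
// of world) against the volumes already set up (actual state of world)
// and runs a mount operation for each gap.
func reconcileMounts(
	desired map[volumeKey]bool,
	actual map[volumeKey]bool,
	mount func(volumeKey) error, // stands in for MountVolume.SetUp
) {
	for v := range desired {
		if actual[v] {
			continue // already mounted; nothing to do
		}
		// "operationExecutor.MountVolume started for volume ..."
		if err := mount(v); err != nil {
			continue // left in the diff; retried with backoff later
		}
		// "MountVolume.SetUp succeeded for volume ..."
		actual[v] = true
	}
}
```

When SetUp fails, the volume stays in the diff and the operation is retried with backoff, which becomes visible further down once the speaker and controller pods reference secrets that do not exist yet.

Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.535991 4808 util.go:30] "No sandbox for pod can be found.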
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.539614 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.540013 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.540215 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-48gnn" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.575068 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5"] Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.674367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgrz\" (UniqueName: \"kubernetes.io/projected/6de38240-7d75-47a0-b5c1-788f619bb8ff-kube-api-access-4jgrz\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.674957 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-apiservice-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.675026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-webhook-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.776482 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jgrz\" (UniqueName: \"kubernetes.io/projected/6de38240-7d75-47a0-b5c1-788f619bb8ff-kube-api-access-4jgrz\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.776553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-apiservice-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.776633 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-webhook-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 
16:09:12.785288 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-apiservice-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.790333 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-webhook-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.798467 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jgrz\" (UniqueName: \"kubernetes.io/projected/6de38240-7d75-47a0-b5c1-788f619bb8ff-kube-api-access-4jgrz\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.893554 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.008498 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6655d59788-74j79"] Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.346300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5"] Feb 17 16:09:13 crc kubenswrapper[4808]: W0217 16:09:13.349622 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6de38240_7d75_47a0_b5c1_788f619bb8ff.slice/crio-d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b WatchSource:0}: Error finding container d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b: Status 404 returned error can't find the container with id d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.509124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" event={"ID":"6de38240-7d75-47a0-b5c1-788f619bb8ff","Type":"ContainerStarted","Data":"d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b"} Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.510393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" event={"ID":"d90f3d87-35f4-4c7d-b157-424ee7b502cd","Type":"ContainerStarted","Data":"4478b218d0ad7e3093f89a991338f003131248ba05caf80774289a5a0217225e"} Feb 17 16:09:16 crc kubenswrapper[4808]: I0217 16:09:16.529740 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" event={"ID":"d90f3d87-35f4-4c7d-b157-424ee7b502cd","Type":"ContainerStarted","Data":"cac9827299cc1e54eb326fa720d07b869e929b2649828ceb750b4ea695a830c2"} Feb 17 16:09:16 crc kubenswrapper[4808]: I0217 16:09:16.530072 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:16 crc kubenswrapper[4808]: I0217 16:09:16.556999 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" podStartSLOduration=1.405463738 podStartE2EDuration="4.556974183s" podCreationTimestamp="2026-02-17 16:09:12 +0000 UTC" firstStartedPulling="2026-02-17 16:09:13.033286558 +0000 UTC m=+916.549645631" lastFinishedPulling="2026-02-17 16:09:16.184797003 +0000 UTC m=+919.701156076" observedRunningTime="2026-02-17 16:09:16.547513649 +0000 UTC m=+920.063872722" watchObservedRunningTime="2026-02-17 16:09:16.556974183 +0000 UTC m=+920.073333256" Feb 17 16:09:19 crc kubenswrapper[4808]: I0217 16:09:19.549538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" event={"ID":"6de38240-7d75-47a0-b5c1-788f619bb8ff","Type":"ContainerStarted","Data":"dae0476d672b70da7819e921ccf8551ca4ce92513ca4582e28bfb122e5e57564"} Feb 17 16:09:19 crc kubenswrapper[4808]: I0217 16:09:19.549931 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:19 crc kubenswrapper[4808]: I0217 16:09:19.571112 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" podStartSLOduration=2.460133964 podStartE2EDuration="7.571094934s" podCreationTimestamp="2026-02-17 16:09:12 +0000 UTC" firstStartedPulling="2026-02-17 16:09:13.353489299 +0000 UTC m=+916.869848372" lastFinishedPulling="2026-02-17 16:09:18.464450269 +0000 UTC m=+921.980809342" observedRunningTime="2026-02-17 16:09:19.569032979 +0000 UTC m=+923.085392052" watchObservedRunningTime="2026-02-17 16:09:19.571094934 +0000 UTC m=+923.087454007" Feb 17 16:09:21 crc kubenswrapper[4808]: I0217 16:09:21.592040 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:21 crc kubenswrapper[4808]: I0217 16:09:21.592309 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:32 crc kubenswrapper[4808]: I0217 16:09:32.902993 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.118266 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pgghj"] Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.121986 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.131057 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgghj"] Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.206418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-utilities\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.206890 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcqrp\" (UniqueName: \"kubernetes.io/projected/7b0c9cdb-4343-4e20-b099-0f1d04243839-kube-api-access-fcqrp\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.206972 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-catalog-content\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.308808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcqrp\" (UniqueName: \"kubernetes.io/projected/7b0c9cdb-4343-4e20-b099-0f1d04243839-kube-api-access-fcqrp\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.308864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-catalog-content\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.308904 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-utilities\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.309500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-utilities\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.309539 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-catalog-content\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.332865 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fcqrp\" (UniqueName: \"kubernetes.io/projected/7b0c9cdb-4343-4e20-b099-0f1d04243839-kube-api-access-fcqrp\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.460363 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.925321 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgghj"] Feb 17 16:09:36 crc kubenswrapper[4808]: W0217 16:09:36.931191 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b0c9cdb_4343_4e20_b099_0f1d04243839.slice/crio-16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e WatchSource:0}: Error finding container 16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e: Status 404 returned error can't find the container with id 16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e Feb 17 16:09:37 crc kubenswrapper[4808]: I0217 16:09:37.695953 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b0c9cdb-4343-4e20-b099-0f1d04243839" containerID="6ab42f795e41b23d736cc04bc17597fc7e2831ad620d3bee81bc62edb18ba793" exitCode=0 Feb 17 16:09:37 crc kubenswrapper[4808]: I0217 16:09:37.696014 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerDied","Data":"6ab42f795e41b23d736cc04bc17597fc7e2831ad620d3bee81bc62edb18ba793"} Feb 17 16:09:37 crc kubenswrapper[4808]: I0217 16:09:37.696438 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerStarted","Data":"16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e"} Feb 17 16:09:43 crc kubenswrapper[4808]: I0217 16:09:43.747317 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b0c9cdb-4343-4e20-b099-0f1d04243839" containerID="cbe1f42567236516ed0b2046eb2e20a27d612bd9627db97fd1dd6d4521f09c3b" exitCode=0 Feb 17 16:09:43 crc kubenswrapper[4808]: I0217 16:09:43.747381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerDied","Data":"cbe1f42567236516ed0b2046eb2e20a27d612bd9627db97fd1dd6d4521f09c3b"} Feb 17 16:09:44 crc kubenswrapper[4808]: I0217 16:09:44.757794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerStarted","Data":"fa6447c11c7841f9d4b11d7f9dd185523aa37804b33f9cdaa631b5ae0354a92c"} Feb 17 16:09:44 crc kubenswrapper[4808]: I0217 16:09:44.781761 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pgghj" podStartSLOduration=2.352741556 podStartE2EDuration="8.781738103s" podCreationTimestamp="2026-02-17 16:09:36 +0000 UTC" firstStartedPulling="2026-02-17 16:09:37.697456607 +0000 UTC m=+941.213815680" lastFinishedPulling="2026-02-17 16:09:44.126453154 +0000 UTC m=+947.642812227" observedRunningTime="2026-02-17 16:09:44.777769886 +0000 UTC 
m=+948.294128979" watchObservedRunningTime="2026-02-17 16:09:44.781738103 +0000 UTC m=+948.298097176" Feb 17 16:09:46 crc kubenswrapper[4808]: I0217 16:09:46.460493 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:46 crc kubenswrapper[4808]: I0217 16:09:46.460914 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:46 crc kubenswrapper[4808]: I0217 16:09:46.545376 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.592023 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.592421 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.592482 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.593242 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.593318 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d" gracePeriod=600 Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.811460 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d" exitCode=0 Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.811531 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d"} Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.811857 4808 scope.go:117] "RemoveContainer" containerID="51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c" Feb 17 16:09:52 crc kubenswrapper[4808]: I0217 16:09:52.531223 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:52 crc kubenswrapper[4808]: I0217 16:09:52.819416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba"} Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.332431 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-c58vl"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.336089 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.339133 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.339490 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.351176 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.354158 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.354761 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vpq7s" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.357405 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.374467 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.451219 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2hrgh"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.452432 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456304 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456470 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456473 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456661 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mlfcz" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459134 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-metrics\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459535 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpk8m\" (UniqueName: \"kubernetes.io/projected/42711d14-278f-41eb-80ce-2e67add356b9-kube-api-access-tpk8m\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459669 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22qn\" (UniqueName: \"kubernetes.io/projected/b55883d0-d8e0-4609-8b1a-033d6808ab56-kube-api-access-r22qn\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-conf\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-reloader\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459927 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b55883d0-d8e0-4609-8b1a-033d6808ab56-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.460004 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42711d14-278f-41eb-80ce-2e67add356b9-metrics-certs\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.460075 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/42711d14-278f-41eb-80ce-2e67add356b9-frr-startup\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.460155 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-sockets\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.495032 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-jvlrt"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.498670 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.501208 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.507413 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jvlrt"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metallb-excludel2\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-metrics\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561516 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpk8m\" (UniqueName: \"kubernetes.io/projected/42711d14-278f-41eb-80ce-2e67add356b9-kube-api-access-tpk8m\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561538 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r22qn\" (UniqueName: \"kubernetes.io/projected/b55883d0-d8e0-4609-8b1a-033d6808ab56-kube-api-access-r22qn\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-conf\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561701 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") 
pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561774 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561804 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561822 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-cert\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561839 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk4hm\" (UniqueName: \"kubernetes.io/projected/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-kube-api-access-hk4hm\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lzbh\" (UniqueName: \"kubernetes.io/projected/86420ee7-2594-4ef8-8b9d-05a073118389-kube-api-access-8lzbh\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-reloader\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b55883d0-d8e0-4609-8b1a-033d6808ab56-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561917 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42711d14-278f-41eb-80ce-2e67add356b9-metrics-certs\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561935 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/42711d14-278f-41eb-80ce-2e67add356b9-frr-startup\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " 
pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561955 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-sockets\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.562371 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-sockets\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.562864 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-reloader\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.563051 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-conf\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.563688 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/42711d14-278f-41eb-80ce-2e67add356b9-frr-startup\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.563882 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-metrics\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.567824 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42711d14-278f-41eb-80ce-2e67add356b9-metrics-certs\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.576724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpk8m\" (UniqueName: \"kubernetes.io/projected/42711d14-278f-41eb-80ce-2e67add356b9-kube-api-access-tpk8m\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.577336 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b55883d0-d8e0-4609-8b1a-033d6808ab56-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.580671 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r22qn\" (UniqueName: \"kubernetes.io/projected/b55883d0-d8e0-4609-8b1a-033d6808ab56-kube-api-access-r22qn\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: 
\"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk4hm\" (UniqueName: \"kubernetes.io/projected/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-kube-api-access-hk4hm\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662702 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lzbh\" (UniqueName: \"kubernetes.io/projected/86420ee7-2594-4ef8-8b9d-05a073118389-kube-api-access-8lzbh\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662758 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metallb-excludel2\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662829 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662850 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662867 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-cert\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663416 4808 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663470 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs podName:86420ee7-2594-4ef8-8b9d-05a073118389 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:54.163453681 +0000 UTC m=+957.679812754 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs") pod "controller-69bbfbf88f-jvlrt" (UID: "86420ee7-2594-4ef8-8b9d-05a073118389") : secret "controller-certs-secret" not found
Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663600 4808 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663636 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:54.163627146 +0000 UTC m=+957.679986219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "speaker-certs-secret" not found
Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663695 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663719 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:54.163712008 +0000 UTC m=+957.680071081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "metallb-memberlist" not found
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.664147 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metallb-excludel2\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.665858 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-cert\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt"
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.668944 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-c58vl"
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.686068 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.687385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk4hm\" (UniqueName: \"kubernetes.io/projected/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-kube-api-access-hk4hm\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.690327 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lzbh\" (UniqueName: \"kubernetes.io/projected/86420ee7-2594-4ef8-8b9d-05a073118389-kube-api-access-8lzbh\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt"
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.828260 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"c3c771a49af0bcbd3469553c9741cea6dc96fd7ff92fccbd9ecc8bccb1075e16"}
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.130137 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"]
Feb 17 16:09:54 crc kubenswrapper[4808]: W0217 16:09:54.137792 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55883d0_d8e0_4609_8b1a_033d6808ab56.slice/crio-b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac WatchSource:0}: Error finding container b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac: Status 404 returned error can't find the container with id b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.171985 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.172105 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt"
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.172198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:54 crc kubenswrapper[4808]: E0217 16:09:54.172528 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 16:09:54 crc kubenswrapper[4808]: E0217 16:09:54.172709 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:55.172675725 +0000 UTC m=+958.689034978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "metallb-memberlist" not found
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.178163 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.178266 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt"
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.420364 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jvlrt"
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.836104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" event={"ID":"b55883d0-d8e0-4609-8b1a-033d6808ab56","Type":"ContainerStarted","Data":"b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac"}
Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.920092 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jvlrt"]
Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.187283 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:55 crc kubenswrapper[4808]: E0217 16:09:55.187466 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 16:09:55 crc kubenswrapper[4808]: E0217 16:09:55.187549 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:57.187532037 +0000 UTC m=+960.703891110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "metallb-memberlist" not found
Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jvlrt" event={"ID":"86420ee7-2594-4ef8-8b9d-05a073118389","Type":"ContainerStarted","Data":"7c12a784d887fa8d0736db135fac58f27ed0e52fd1b88b44692c071f55a837b5"}
Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jvlrt" event={"ID":"86420ee7-2594-4ef8-8b9d-05a073118389","Type":"ContainerStarted","Data":"de6d0c78fbe7f4242a4e07f1cbab2a12bf2a822ee3675c882bd8464fc6e2384b"}
Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848876 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jvlrt" event={"ID":"86420ee7-2594-4ef8-8b9d-05a073118389","Type":"ContainerStarted","Data":"2e3c264fbc1a73ebb1149c5181116f75cc5e2d92265d82d4dc9d6b03e1cdcd72"}
Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848889 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-jvlrt"
Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.872114 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-jvlrt" podStartSLOduration=2.872071908 podStartE2EDuration="2.872071908s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:55.869185731 +0000 UTC m=+959.385544804" watchObservedRunningTime="2026-02-17 16:09:55.872071908 +0000 UTC m=+959.388430981"
Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.535246 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pgghj"
Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.642120 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgghj"]
Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.696422 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"]
Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.697006 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqtsg" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server" containerID="cri-o://2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1" gracePeriod=2
Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.858438 4808 generic.go:334] "Generic (PLEG): container finished" podID="7cdb188e-770b-4b77-8396-a2422be880a4" containerID="2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1" exitCode=0
Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.858537 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1"}
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.213257 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.226408 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.232877 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.328241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"7cdb188e-770b-4b77-8396-a2422be880a4\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") "
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.328346 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"7cdb188e-770b-4b77-8396-a2422be880a4\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") "
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.328376 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"7cdb188e-770b-4b77-8396-a2422be880a4\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") "
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.330087 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities" (OuterVolumeSpecName: "utilities") pod "7cdb188e-770b-4b77-8396-a2422be880a4" (UID: "7cdb188e-770b-4b77-8396-a2422be880a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.341867 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc" (OuterVolumeSpecName: "kube-api-access-gmplc") pod "7cdb188e-770b-4b77-8396-a2422be880a4" (UID: "7cdb188e-770b-4b77-8396-a2422be880a4"). InnerVolumeSpecName "kube-api-access-gmplc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.374163 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mlfcz"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.388821 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.392332 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cdb188e-770b-4b77-8396-a2422be880a4" (UID: "7cdb188e-770b-4b77-8396-a2422be880a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:09:57 crc kubenswrapper[4808]: W0217 16:09:57.420879 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e5bfe8_d4de_4863_b830_db146a4f0bd8.slice/crio-9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1 WatchSource:0}: Error finding container 9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1: Status 404 returned error can't find the container with id 9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.429776 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.429809 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") on node \"crc\" DevicePath \"\""
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.429821 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.673815 4808 scope.go:117] "RemoveContainer" containerID="2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.696019 4808 scope.go:117] "RemoveContainer" containerID="90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.713477 4808 scope.go:117] "RemoveContainer" containerID="47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.866745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.868714 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hrgh" event={"ID":"c8e5bfe8-d4de-4863-b830-db146a4f0bd8","Type":"ContainerStarted","Data":"9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1"}
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.868769 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4"}
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.912959 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"]
Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.918108 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"]
Feb 17 16:09:58 crc kubenswrapper[4808]: I0217 16:09:58.876536 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hrgh" event={"ID":"c8e5bfe8-d4de-4863-b830-db146a4f0bd8","Type":"ContainerStarted","Data":"22f85ac8d5c3800c41c82c02ba4371d06f3f484ada503e831c8b840c81e7a06c"}
Feb 17 16:09:59 crc kubenswrapper[4808]: I0217 16:09:59.155278 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" path="/var/lib/kubelet/pods/7cdb188e-770b-4b77-8396-a2422be880a4/volumes"
Feb 17 16:09:59 crc kubenswrapper[4808]: I0217 16:09:59.889143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hrgh" event={"ID":"c8e5bfe8-d4de-4863-b830-db146a4f0bd8","Type":"ContainerStarted","Data":"38dd71844541127279da98403b0903521a13b00e192825a9d7e29548457789ba"}
Feb 17 16:09:59 crc kubenswrapper[4808]: I0217 16:09:59.889391 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2hrgh"
Feb 17 16:09:59 crc kubenswrapper[4808]: I0217 16:09:59.913553 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2hrgh" podStartSLOduration=6.913533755 podStartE2EDuration="6.913533755s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:59.907987886 +0000 UTC m=+963.424346959" watchObservedRunningTime="2026-02-17 16:09:59.913533755 +0000 UTC m=+963.429892818"
Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.917786 4808 generic.go:334] "Generic (PLEG): container finished" podID="42711d14-278f-41eb-80ce-2e67add356b9" containerID="100e4ef4b2f2ab83c6d70346f4353427fef8930a51342fb983dcc3630a173e9e" exitCode=0
Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.917923 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerDied","Data":"100e4ef4b2f2ab83c6d70346f4353427fef8930a51342fb983dcc3630a173e9e"}
Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.922562 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" event={"ID":"b55883d0-d8e0-4609-8b1a-033d6808ab56","Type":"ContainerStarted","Data":"62a40b9d296b95dcdf2a1c11152b1ea4cb0672bb35ba8e4b44359b3d966e54d1"}
Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.922763 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"
Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.954906 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" podStartSLOduration=2.271503333 podStartE2EDuration="10.954889689s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="2026-02-17 16:09:54.140315961 +0000 UTC m=+957.656675034" lastFinishedPulling="2026-02-17 16:10:02.823702277 +0000 UTC m=+966.340061390" observedRunningTime="2026-02-17 16:10:03.952124436 +0000 UTC m=+967.468483509" watchObservedRunningTime="2026-02-17 16:10:03.954889689 +0000 UTC m=+967.471248762"
Feb 17 16:10:04 crc kubenswrapper[4808]: I0217 16:10:04.930013 4808 generic.go:334] "Generic (PLEG): container finished" podID="42711d14-278f-41eb-80ce-2e67add356b9" containerID="fbf44e61aabf63de03154baaba818c6e4afefb871dc6642842828d5e075d169d" exitCode=0
Feb 17 16:10:04 crc kubenswrapper[4808]: I0217 16:10:04.930179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerDied","Data":"fbf44e61aabf63de03154baaba818c6e4afefb871dc6642842828d5e075d169d"}
Feb 17 16:10:05 crc kubenswrapper[4808]: I0217 16:10:05.942733 4808 generic.go:334] "Generic (PLEG): container finished" podID="42711d14-278f-41eb-80ce-2e67add356b9" containerID="f6095819d9cf06e5da0bac1456811b9d743389d3b95aba5c0568a280f9a26e65" exitCode=0
Feb 17 16:10:05 crc kubenswrapper[4808]: I0217 16:10:05.942838 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerDied","Data":"f6095819d9cf06e5da0bac1456811b9d743389d3b95aba5c0568a280f9a26e65"}
Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979387 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"e8624e2c142931f20e19390ce6be8cc6d6f8c6116d64fcd2ec7b2085945fd8a3"}
Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979715 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"5bb9daff2c4f52b8d2b423730b2cb8deebab166cf2cd799d545d3ef0a857b2cd"}
Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"cde5c2c2753bf8283f3f7824ac8948dc7bac72507eb30e1ea30820438d3e8b29"}
Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"4d62b7eea66b4c3e49c633262db57a4de4fb4268d5877d855b89e0dd26877731"}
Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979751 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"6646c5670cb8cc216a5fff5945b2086ec4bad170625cf3909b865cc07cca6080"}
Feb 17 16:10:07 crc kubenswrapper[4808]: I0217 16:10:07.997320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"911eb6d8e00e4ee6440bab53d779a0cfa05bcb524777535451c47c556dd43f06"}
Feb 17 16:10:07 crc kubenswrapper[4808]: I0217 16:10:07.997627 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-c58vl"
Feb 17 16:10:08 crc kubenswrapper[4808]: I0217 16:10:08.046866 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-c58vl" podStartSLOduration=6.026997754 podStartE2EDuration="15.046838386s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="2026-02-17 16:09:53.823158781 +0000 UTC m=+957.339517854" lastFinishedPulling="2026-02-17 16:10:02.842999383 +0000 UTC m=+966.359358486" observedRunningTime="2026-02-17 16:10:08.035333379 +0000 UTC m=+971.551692492" watchObservedRunningTime="2026-02-17 16:10:08.046838386 +0000 UTC m=+971.563197489"
Feb 17 16:10:08 crc kubenswrapper[4808]: I0217 16:10:08.669810 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-c58vl"
Feb 17 16:10:08 crc kubenswrapper[4808]: I0217 16:10:08.708560 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-c58vl"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.382829 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"]
Feb 17 16:10:09 crc kubenswrapper[4808]: E0217 16:10:09.383813 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-utilities"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.383837 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-utilities"
Feb 17 16:10:09 crc kubenswrapper[4808]: E0217 16:10:09.383866 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-content"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.383875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-content"
Feb 17 16:10:09 crc kubenswrapper[4808]: E0217 16:10:09.383888 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.383897 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.384075 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.385497 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.399190 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"]
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.411235 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.411358 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.411432 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.512732 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.512803 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.512834 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.513351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.513467 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.534418 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.707830 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.974241 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"]
Feb 17 16:10:10 crc kubenswrapper[4808]: I0217 16:10:10.027709 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerStarted","Data":"0a8484881de1d70ec07925f19c53404a1c36bb6e619b19475afa5fa460840f39"}
Feb 17 16:10:11 crc kubenswrapper[4808]: I0217 16:10:11.041419 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerID="ca6dad098d98000904ee193800e5cff6af216019d831be3c7082c77fd328066f" exitCode=0
Feb 17 16:10:11 crc kubenswrapper[4808]: I0217 16:10:11.041541 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"ca6dad098d98000904ee193800e5cff6af216019d831be3c7082c77fd328066f"}
Feb 17 16:10:12 crc kubenswrapper[4808]: I0217 16:10:12.057969 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerStarted","Data":"55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f"}
Feb 17 16:10:13 crc kubenswrapper[4808]: I0217 16:10:13.069373 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerID="55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f" exitCode=0
Feb 17 16:10:13 crc kubenswrapper[4808]: I0217 16:10:13.069439 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f"}
Feb 17 16:10:13 crc kubenswrapper[4808]: I0217 16:10:13.700052 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"
Feb 17 16:10:14 crc kubenswrapper[4808]: I0217 16:10:14.080717 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerStarted","Data":"4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec"}
Feb 17 16:10:14 crc kubenswrapper[4808]: I0217 16:10:14.103659 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b22t4" podStartSLOduration=2.67489184 podStartE2EDuration="5.103641552s" podCreationTimestamp="2026-02-17 16:10:09 +0000 UTC" firstStartedPulling="2026-02-17 16:10:11.044457966 +0000 UTC m=+974.560817079" lastFinishedPulling="2026-02-17 16:10:13.473207678 +0000 UTC m=+976.989566791" observedRunningTime="2026-02-17 16:10:14.101400322 +0000 UTC m=+977.617759405" watchObservedRunningTime="2026-02-17 16:10:14.103641552 +0000 UTC m=+977.620000635"
Feb 17 16:10:14 crc kubenswrapper[4808]: I0217 16:10:14.426106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-jvlrt"
Feb 17 16:10:17 crc kubenswrapper[4808]: I0217 16:10:17.393759 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2hrgh"
Feb 17 16:10:19 crc kubenswrapper[4808]: I0217 16:10:19.708992 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:19 crc kubenswrapper[4808]: I0217 16:10:19.709891 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:19 crc kubenswrapper[4808]: I0217 16:10:19.773631 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.190350 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"]
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.193706 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.200142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"]
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.203869 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.204093 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.208512 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ms6kq"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.236349 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.277011 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"openstack-operator-index-qnxrh\" (UID: \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") " pod="openstack-operators/openstack-operator-index-qnxrh"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.379102 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"openstack-operator-index-qnxrh\" (UID: \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") " pod="openstack-operators/openstack-operator-index-qnxrh"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.396398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"openstack-operator-index-qnxrh\" (UID: \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") " pod="openstack-operators/openstack-operator-index-qnxrh"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.518324 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh"
Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.963040 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"]
Feb 17 16:10:20 crc kubenswrapper[4808]: W0217 16:10:20.979034 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac34750_b7bc_47ce_b128_10bfc5e9c8cf.slice/crio-7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30 WatchSource:0}: Error finding container 7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30: Status 404 returned error can't find the container with id 7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30
Feb 17 16:10:21 crc kubenswrapper[4808]: I0217 16:10:21.156227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerStarted","Data":"7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30"}
Feb 17 16:10:23 crc kubenswrapper[4808]: I0217 16:10:23.675466 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-c58vl"
Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.187069 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerStarted","Data":"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"}
Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.216288 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qnxrh" podStartSLOduration=1.346360115 podStartE2EDuration="4.21626352s" podCreationTimestamp="2026-02-17 16:10:20 +0000 UTC" firstStartedPulling="2026-02-17 16:10:20.982968599 +0000 UTC m=+984.499327682" lastFinishedPulling="2026-02-17 16:10:23.852872004 +0000 UTC m=+987.369231087" observedRunningTime="2026-02-17 16:10:24.199350258 +0000 UTC m=+987.715709361" watchObservedRunningTime="2026-02-17 16:10:24.21626352 +0000 UTC m=+987.732622623"
Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.813604 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"]
Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.813865 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b22t4" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server" containerID="cri-o://4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec" gracePeriod=2
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.019563 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"]
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.200201 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerID="4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec" exitCode=0
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.200245 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec"}
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.273477 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.354226 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") "
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.354316 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") "
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.354568 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") "
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.355769 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities" (OuterVolumeSpecName: "utilities") pod "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" (UID: "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.364139 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt" (OuterVolumeSpecName: "kube-api-access-5cdlt") pod "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" (UID: "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab"). InnerVolumeSpecName "kube-api-access-5cdlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.389163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" (UID: "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.458462 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.458523 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.458546 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.821698 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-75t5f"]
Feb 17 16:10:25 crc kubenswrapper[4808]: E0217 16:10:25.822135 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-content"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822169 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-content"
Feb 17 16:10:25 crc kubenswrapper[4808]: E0217 16:10:25.822198 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-utilities"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822215 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-utilities"
Feb 17 16:10:25 crc kubenswrapper[4808]: E0217 16:10:25.822240 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822262 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822524 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.823265 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.837519 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-75t5f"]
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.880384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlh5x\" (UniqueName: \"kubernetes.io/projected/aa72ff82-f411-42f6-8144-937ca196211b-kube-api-access-mlh5x\") pod \"openstack-operator-index-75t5f\" (UID: \"aa72ff82-f411-42f6-8144-937ca196211b\") " pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.982547 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlh5x\" (UniqueName: \"kubernetes.io/projected/aa72ff82-f411-42f6-8144-937ca196211b-kube-api-access-mlh5x\") pod \"openstack-operator-index-75t5f\" (UID: \"aa72ff82-f411-42f6-8144-937ca196211b\") " pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.007709 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlh5x\" (UniqueName: \"kubernetes.io/projected/aa72ff82-f411-42f6-8144-937ca196211b-kube-api-access-mlh5x\") pod \"openstack-operator-index-75t5f\" (UID: \"aa72ff82-f411-42f6-8144-937ca196211b\") " pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.191127 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213405 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"0a8484881de1d70ec07925f19c53404a1c36bb6e619b19475afa5fa460840f39"}
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213488 4808 scope.go:117] "RemoveContainer" containerID="4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213515 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qnxrh" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server" containerID="cri-o://5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b" gracePeriod=2
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213421 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.411339 4808 scope.go:117] "RemoveContainer" containerID="55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.416774 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"]
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.432187 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"]
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.452874 4808 scope.go:117] "RemoveContainer" containerID="ca6dad098d98000904ee193800e5cff6af216019d831be3c7082c77fd328066f"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.644542 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh"
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.723000 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-75t5f"]
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.795901 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\" (UID: \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") "
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.803937 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8" (OuterVolumeSpecName: "kube-api-access-jxsv8") pod "0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" (UID: "0ac34750-b7bc-47ce-b128-10bfc5e9c8cf"). InnerVolumeSpecName "kube-api-access-jxsv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.897398 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.159670 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" path="/var/lib/kubelet/pods/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab/volumes"
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.222292 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-75t5f" event={"ID":"aa72ff82-f411-42f6-8144-937ca196211b","Type":"ContainerStarted","Data":"4b501886246d01696f527c3a9eef623152d64357a2a30f6ac7df3bb823cc2733"}
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.222351 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-75t5f" event={"ID":"aa72ff82-f411-42f6-8144-937ca196211b","Type":"ContainerStarted","Data":"e480968f2a83761fae875c8aed263cfc3dbabd013086bca5f6877d0d3a930751"}
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224843 4808 generic.go:334] "Generic (PLEG): container finished" podID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b" exitCode=0
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerDied","Data":"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"}
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224918 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh"
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224945 4808 scope.go:117] "RemoveContainer" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224934 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerDied","Data":"7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30"}
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.250042 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-75t5f" podStartSLOduration=2.206566054 podStartE2EDuration="2.250002336s" podCreationTimestamp="2026-02-17 16:10:25 +0000 UTC" firstStartedPulling="2026-02-17 16:10:26.729920111 +0000 UTC m=+990.246279194" lastFinishedPulling="2026-02-17 16:10:26.773356383 +0000 UTC m=+990.289715476" observedRunningTime="2026-02-17 16:10:27.243643945 +0000 UTC m=+990.760003108" watchObservedRunningTime="2026-02-17 16:10:27.250002336 +0000 UTC m=+990.766361409"
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.252815 4808 scope.go:117] "RemoveContainer" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"
Feb 17 16:10:27 crc kubenswrapper[4808]: E0217 16:10:27.253593 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b\": container with ID starting with 5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b not found: ID does not exist" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.253652 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"} err="failed to get container status \"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b\": rpc error: code = NotFound desc = could not find container \"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b\": container with ID starting with 5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b not found: ID does not exist"
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.268715 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"]
Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.274131 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"]
Feb 17 16:10:29 crc kubenswrapper[4808]: I0217 16:10:29.160117 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" path="/var/lib/kubelet/pods/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf/volumes"
Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.191528 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.192200 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.229714 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.351841 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-75t5f"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.482238 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"]
Feb 17 16:10:39 crc kubenswrapper[4808]: E0217 16:10:39.483211 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.483238 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.483484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.485069 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.493060 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tgxbp"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.497680 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"]
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.604496 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.604914 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.605081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.707676 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.707751 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.707783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.708338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.708429 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.737862 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.825104 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.032801 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"]
Feb 17 16:10:40 crc kubenswrapper[4808]: W0217 16:10:40.037835 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb0fef44_0d18_499b_bfd1_c684136b5095.slice/crio-10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0 WatchSource:0}: Error finding container 10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0: Status 404 returned error can't find the container with id 10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0
Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.346811 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerID="6798e3bf9c5690ebcf16cfb39c9e927164cf9e99a1661245cb27ffb486e54af4" exitCode=0
Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.346873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"6798e3bf9c5690ebcf16cfb39c9e927164cf9e99a1661245cb27ffb486e54af4"}
Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.346917 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerStarted","Data":"10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0"}
Feb 17 16:10:41 crc kubenswrapper[4808]: I0217 16:10:41.357956 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerID="288ee85f720f8808086f4f8617e718281d80757ee7bb3a062de0a4491fa40350" exitCode=0
Feb 17 16:10:41 crc kubenswrapper[4808]: I0217 16:10:41.358070 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"288ee85f720f8808086f4f8617e718281d80757ee7bb3a062de0a4491fa40350"}
Feb 17 16:10:42 crc kubenswrapper[4808]: I0217 16:10:42.369140 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerID="1eacd106543c3b1eed89107e6d211095f090e29dbc2bc301fc76abb05f46fa29" exitCode=0
Feb 17 16:10:42 crc kubenswrapper[4808]: I0217 16:10:42.369226 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"1eacd106543c3b1eed89107e6d211095f090e29dbc2bc301fc76abb05f46fa29"}
Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.695137 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"
Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.877686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"bb0fef44-0d18-499b-bfd1-c684136b5095\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") "
Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.877758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"bb0fef44-0d18-499b-bfd1-c684136b5095\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") "
Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.877833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod \"bb0fef44-0d18-499b-bfd1-c684136b5095\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") "
Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.878468 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle" (OuterVolumeSpecName: "bundle") pod "bb0fef44-0d18-499b-bfd1-c684136b5095" (UID: "bb0fef44-0d18-499b-bfd1-c684136b5095"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.886744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9" (OuterVolumeSpecName: "kube-api-access-v9vk9") pod "bb0fef44-0d18-499b-bfd1-c684136b5095" (UID: "bb0fef44-0d18-499b-bfd1-c684136b5095"). InnerVolumeSpecName "kube-api-access-v9vk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.898931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util" (OuterVolumeSpecName: "util") pod "bb0fef44-0d18-499b-bfd1-c684136b5095" (UID: "bb0fef44-0d18-499b-bfd1-c684136b5095"). InnerVolumeSpecName "util".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.980282 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.980339 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.980359 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:44 crc kubenswrapper[4808]: I0217 16:10:44.389477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0"} Feb 17 16:10:44 crc kubenswrapper[4808]: I0217 16:10:44.389540 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0" Feb 17 16:10:44 crc kubenswrapper[4808]: I0217 16:10:44.389563 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.174107 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9"] Feb 17 16:10:48 crc kubenswrapper[4808]: E0217 16:10:48.175009 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="pull" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175027 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="pull" Feb 17 16:10:48 crc kubenswrapper[4808]: E0217 16:10:48.175057 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="extract" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175066 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="extract" Feb 17 16:10:48 crc kubenswrapper[4808]: E0217 16:10:48.175078 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="util" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175087 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="util" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175218 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="extract" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175770 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.181824 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-n8kv8" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.207555 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9"] Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.246280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4lr2\" (UniqueName: \"kubernetes.io/projected/2db6cd8b-961f-442e-8bd4-ced98807709a-kube-api-access-m4lr2\") pod \"openstack-operator-controller-init-64549bfd8b-rwgq9\" (UID: \"2db6cd8b-961f-442e-8bd4-ced98807709a\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.347172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4lr2\" (UniqueName: \"kubernetes.io/projected/2db6cd8b-961f-442e-8bd4-ced98807709a-kube-api-access-m4lr2\") pod \"openstack-operator-controller-init-64549bfd8b-rwgq9\" (UID: \"2db6cd8b-961f-442e-8bd4-ced98807709a\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.371233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4lr2\" (UniqueName: \"kubernetes.io/projected/2db6cd8b-961f-442e-8bd4-ced98807709a-kube-api-access-m4lr2\") pod \"openstack-operator-controller-init-64549bfd8b-rwgq9\" (UID: \"2db6cd8b-961f-442e-8bd4-ced98807709a\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.499963 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.745650 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9"] Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.758156 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:10:49 crc kubenswrapper[4808]: I0217 16:10:49.431550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" event={"ID":"2db6cd8b-961f-442e-8bd4-ced98807709a","Type":"ContainerStarted","Data":"26f0e5f51901ff6ef8217fa5621b4138098c6548eb8cef1a8ec924e81786dfd1"} Feb 17 16:10:53 crc kubenswrapper[4808]: I0217 16:10:53.480140 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" event={"ID":"2db6cd8b-961f-442e-8bd4-ced98807709a","Type":"ContainerStarted","Data":"3d9a365357cef78af96c30126ed8d78286157969a21437877031df0b49d4f50f"} Feb 17 16:10:53 crc kubenswrapper[4808]: I0217 16:10:53.480933 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:53 crc kubenswrapper[4808]: I0217 16:10:53.532248 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" podStartSLOduration=1.791937088 podStartE2EDuration="5.532216413s" podCreationTimestamp="2026-02-17 16:10:48 +0000 UTC" firstStartedPulling="2026-02-17 16:10:48.757951244 +0000 UTC m=+1012.274310317" lastFinishedPulling="2026-02-17 16:10:52.498230569 +0000 UTC m=+1016.014589642" observedRunningTime="2026-02-17 16:10:53.523894701 +0000 UTC m=+1017.040253874" watchObservedRunningTime="2026-02-17 16:10:53.532216413 +0000 UTC m=+1017.048575526" Feb 17 16:10:58 crc kubenswrapper[4808]: I0217 16:10:58.503241 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.046015 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.048144 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.050813 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4b5sk" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.061288 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.062253 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.067141 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-t9jrj" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.071871 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.078186 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.102384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltm6\" (UniqueName: \"kubernetes.io/projected/3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb-kube-api-access-xltm6\") pod \"barbican-operator-controller-manager-868647ff47-cjh7p\" (UID: \"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.102470 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm6jv\" (UniqueName: \"kubernetes.io/projected/77df5d1f-daff-4508-861a-335ab87f2366-kube-api-access-tm6jv\") pod \"cinder-operator-controller-manager-5d946d989d-4cv77\" (UID: \"77df5d1f-daff-4508-861a-335ab87f2366\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.132137 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.133146 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.138117 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qxn4p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.145778 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.181084 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.181939 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.186273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-br5nd"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.201401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.209212 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xltm6\" (UniqueName: \"kubernetes.io/projected/3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb-kube-api-access-xltm6\") pod \"barbican-operator-controller-manager-868647ff47-cjh7p\" (UID: \"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.209262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm6jv\" (UniqueName: \"kubernetes.io/projected/77df5d1f-daff-4508-861a-335ab87f2366-kube-api-access-tm6jv\") pod \"cinder-operator-controller-manager-5d946d989d-4cv77\" (UID: \"77df5d1f-daff-4508-861a-335ab87f2366\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.209333 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lz8\" (UniqueName: \"kubernetes.io/projected/e2e1b5f4-7ed2-4ab1-871b-1974a7559252-kube-api-access-b5lz8\") pod \"designate-operator-controller-manager-6d8bf5c495-gl97b\" (UID: \"e2e1b5f4-7ed2-4ab1-871b-1974a7559252\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.246704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xltm6\" (UniqueName: \"kubernetes.io/projected/3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb-kube-api-access-xltm6\") pod \"barbican-operator-controller-manager-868647ff47-cjh7p\" (UID: \"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.247153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm6jv\" (UniqueName: \"kubernetes.io/projected/77df5d1f-daff-4508-861a-335ab87f2366-kube-api-access-tm6jv\") pod \"cinder-operator-controller-manager-5d946d989d-4cv77\" (UID: \"77df5d1f-daff-4508-861a-335ab87f2366\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.265322 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.274863 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.291784 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.292704 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.310839 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lz8\" (UniqueName: \"kubernetes.io/projected/e2e1b5f4-7ed2-4ab1-871b-1974a7559252-kube-api-access-b5lz8\") pod \"designate-operator-controller-manager-6d8bf5c495-gl97b\" (UID: \"e2e1b5f4-7ed2-4ab1-871b-1974a7559252\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.310972 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8xj\" (UniqueName: \"kubernetes.io/projected/b622bb16-c5b4-45ea-b493-e681d36d49ac-kube-api-access-fb8xj\") pod \"glance-operator-controller-manager-77987464f4-b7hkk\" (UID: \"b622bb16-c5b4-45ea-b493-e681d36d49ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.320953 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-nwh6w"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.321186 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-cgr2n"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.344632 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.357539 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.361148 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lz8\" (UniqueName: \"kubernetes.io/projected/e2e1b5f4-7ed2-4ab1-871b-1974a7559252-kube-api-access-b5lz8\") pod \"designate-operator-controller-manager-6d8bf5c495-gl97b\" (UID: \"e2e1b5f4-7ed2-4ab1-871b-1974a7559252\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.387859 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.388521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.395292 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.395332 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.395350 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.397170 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.398640 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.400289 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.400470 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b2sv9"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.400639 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4cfvg"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.409302 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.412766 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rqg\" (UniqueName: \"kubernetes.io/projected/681f334b-d0ac-43dc-babb-92d9cb7c0440-kube-api-access-v4rqg\") pod \"horizon-operator-controller-manager-5b9b8895d5-plpr2\" (UID: \"681f334b-d0ac-43dc-babb-92d9cb7c0440\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.412858 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8xj\" (UniqueName: \"kubernetes.io/projected/b622bb16-c5b4-45ea-b493-e681d36d49ac-kube-api-access-fb8xj\") pod \"glance-operator-controller-manager-77987464f4-b7hkk\" (UID: \"b622bb16-c5b4-45ea-b493-e681d36d49ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.412897 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65c7l\" (UniqueName: \"kubernetes.io/projected/d4bd0818-617e-418a-b7c7-f70ba7ebc3d8-kube-api-access-65c7l\") pod \"heat-operator-controller-manager-69f49c598c-xv924\" (UID: \"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.453237 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8xj\" (UniqueName: \"kubernetes.io/projected/b622bb16-c5b4-45ea-b493-e681d36d49ac-kube-api-access-fb8xj\") pod \"glance-operator-controller-manager-77987464f4-b7hkk\" (UID: \"b622bb16-c5b4-45ea-b493-e681d36d49ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.453471 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.454439 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.457648 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-754r7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.467329 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.473051 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.494764 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.496059 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.504946 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-l4w9c"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.508797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522244 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65c7l\" (UniqueName: \"kubernetes.io/projected/d4bd0818-617e-418a-b7c7-f70ba7ebc3d8-kube-api-access-65c7l\") pod \"heat-operator-controller-manager-69f49c598c-xv924\" (UID: \"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522307 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgrh\" (UniqueName: \"kubernetes.io/projected/96baec58-63b9-49cd-9cf4-32639e58d4ac-kube-api-access-4jgrh\") pod \"keystone-operator-controller-manager-b4d948c87-8xfc6\" (UID: \"96baec58-63b9-49cd-9cf4-32639e58d4ac\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522332 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfgx\" (UniqueName: \"kubernetes.io/projected/ace1fd54-7ff8-45b9-a77b-c3908044365e-kube-api-access-2zfgx\") pod \"ironic-operator-controller-manager-554564d7fc-thpj7\" (UID: \"ace1fd54-7ff8-45b9-a77b-c3908044365e\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522406 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rqg\" (UniqueName: \"kubernetes.io/projected/681f334b-d0ac-43dc-babb-92d9cb7c0440-kube-api-access-v4rqg\") pod \"horizon-operator-controller-manager-5b9b8895d5-plpr2\" (UID: \"681f334b-d0ac-43dc-babb-92d9cb7c0440\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"
volume \"kube-api-access-v4rqg\" (UniqueName: \"kubernetes.io/projected/681f334b-d0ac-43dc-babb-92d9cb7c0440-kube-api-access-v4rqg\") pod \"horizon-operator-controller-manager-5b9b8895d5-plpr2\" (UID: \"681f334b-d0ac-43dc-babb-92d9cb7c0440\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522451 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czn74\" (UniqueName: \"kubernetes.io/projected/6508a74d-2dba-4d1b-910c-95c9463c15a4-kube-api-access-czn74\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.526656 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.527864 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.544633 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.544995 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wkk2j" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.545747 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.571296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rqg\" (UniqueName: \"kubernetes.io/projected/681f334b-d0ac-43dc-babb-92d9cb7c0440-kube-api-access-v4rqg\") pod \"horizon-operator-controller-manager-5b9b8895d5-plpr2\" (UID: \"681f334b-d0ac-43dc-babb-92d9cb7c0440\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.577115 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65c7l\" (UniqueName: \"kubernetes.io/projected/d4bd0818-617e-418a-b7c7-f70ba7ebc3d8-kube-api-access-65c7l\") pod \"heat-operator-controller-manager-69f49c598c-xv924\" (UID: \"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.582635 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.583726 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.585337 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.596663 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jltzm" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.604327 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.608167 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rrpvb" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.626879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfgx\" (UniqueName: \"kubernetes.io/projected/ace1fd54-7ff8-45b9-a77b-c3908044365e-kube-api-access-2zfgx\") pod \"ironic-operator-controller-manager-554564d7fc-thpj7\" (UID: \"ace1fd54-7ff8-45b9-a77b-c3908044365e\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.626961 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dhp\" (UniqueName: \"kubernetes.io/projected/a6f8ca14-e1db-4dcc-a64d-7bf137105e80-kube-api-access-w8dhp\") pod \"nova-operator-controller-manager-567668f5cf-t9k25\" (UID: \"a6f8ca14-e1db-4dcc-a64d-7bf137105e80\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czn74\" (UniqueName: \"kubernetes.io/projected/6508a74d-2dba-4d1b-910c-95c9463c15a4-kube-api-access-czn74\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627058 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp8l\" (UniqueName: \"kubernetes.io/projected/a40e52a1-9867-413a-81fb-324789e0a009-kube-api-access-dhp8l\") pod \"mariadb-operator-controller-manager-6994f66f48-vgbmj\" (UID: \"a40e52a1-9867-413a-81fb-324789e0a009\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627100 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jgrh\" (UniqueName: \"kubernetes.io/projected/96baec58-63b9-49cd-9cf4-32639e58d4ac-kube-api-access-4jgrh\") pod \"keystone-operator-controller-manager-b4d948c87-8xfc6\" (UID: \"96baec58-63b9-49cd-9cf4-32639e58d4ac\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627126 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: 
\"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627158 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkjh\" (UniqueName: \"kubernetes.io/projected/93278ccd-52fe-4848-9a46-3f47369d47ab-kube-api-access-hgkjh\") pod \"manila-operator-controller-manager-54f6768c69-tkhr5\" (UID: \"93278ccd-52fe-4848-9a46-3f47369d47ab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.627684 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.627751 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:20.127732723 +0000 UTC m=+1043.644091796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.635165 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.636005 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.640150 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-82s6w" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.643529 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.649697 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.661382 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.682727 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czn74\" (UniqueName: \"kubernetes.io/projected/6508a74d-2dba-4d1b-910c-95c9463c15a4-kube-api-access-czn74\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.684146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jgrh\" (UniqueName: \"kubernetes.io/projected/96baec58-63b9-49cd-9cf4-32639e58d4ac-kube-api-access-4jgrh\") pod \"keystone-operator-controller-manager-b4d948c87-8xfc6\" (UID: \"96baec58-63b9-49cd-9cf4-32639e58d4ac\") " 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.686756 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.696715 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.700483 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.704049 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.705932 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfgx\" (UniqueName: \"kubernetes.io/projected/ace1fd54-7ff8-45b9-a77b-c3908044365e-kube-api-access-2zfgx\") pod \"ironic-operator-controller-manager-554564d7fc-thpj7\" (UID: \"ace1fd54-7ff8-45b9-a77b-c3908044365e\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.706721 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tkkqw" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.706834 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.729181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkjh\" (UniqueName: \"kubernetes.io/projected/93278ccd-52fe-4848-9a46-3f47369d47ab-kube-api-access-hgkjh\") pod \"manila-operator-controller-manager-54f6768c69-tkhr5\" (UID: \"93278ccd-52fe-4848-9a46-3f47369d47ab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.732997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733074 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz9k7\" (UniqueName: \"kubernetes.io/projected/a2547c9d-80d6-491d-8517-26327e35a1f4-kube-api-access-jz9k7\") pod \"octavia-operator-controller-manager-69f8888797-xp9sf\" (UID: \"a2547c9d-80d6-491d-8517-26327e35a1f4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733179 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8dhp\" (UniqueName: \"kubernetes.io/projected/a6f8ca14-e1db-4dcc-a64d-7bf137105e80-kube-api-access-w8dhp\") pod \"nova-operator-controller-manager-567668f5cf-t9k25\" (UID: \"a6f8ca14-e1db-4dcc-a64d-7bf137105e80\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733965 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhp8l\" (UniqueName: \"kubernetes.io/projected/a40e52a1-9867-413a-81fb-324789e0a009-kube-api-access-dhp8l\") pod \"mariadb-operator-controller-manager-6994f66f48-vgbmj\" (UID: \"a40e52a1-9867-413a-81fb-324789e0a009\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.734016 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xt8q\" (UniqueName: \"kubernetes.io/projected/2ec18a16-766f-4a0c-a393-0ca7a999011e-kube-api-access-4xt8q\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.734431 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6lvx\" (UniqueName: \"kubernetes.io/projected/8d4c91a6-8441-45a6-bb6a-7655ba464fb9-kube-api-access-s6lvx\") pod \"neutron-operator-controller-manager-64ddbf8bb-kg6xx\" (UID: \"8d4c91a6-8441-45a6-bb6a-7655ba464fb9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.739551 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.766197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkjh\" (UniqueName: \"kubernetes.io/projected/93278ccd-52fe-4848-9a46-3f47369d47ab-kube-api-access-hgkjh\") pod \"manila-operator-controller-manager-54f6768c69-tkhr5\" (UID: \"93278ccd-52fe-4848-9a46-3f47369d47ab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.789888 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.812336 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhp8l\" (UniqueName: \"kubernetes.io/projected/a40e52a1-9867-413a-81fb-324789e0a009-kube-api-access-dhp8l\") pod \"mariadb-operator-controller-manager-6994f66f48-vgbmj\" (UID: \"a40e52a1-9867-413a-81fb-324789e0a009\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.812700 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8dhp\" (UniqueName: 
\"kubernetes.io/projected/a6f8ca14-e1db-4dcc-a64d-7bf137105e80-kube-api-access-w8dhp\") pod \"nova-operator-controller-manager-567668f5cf-t9k25\" (UID: \"a6f8ca14-e1db-4dcc-a64d-7bf137105e80\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.813690 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.817800 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-brxds" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.831479 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.832598 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.835908 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-s8wwm" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838424 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xt8q\" (UniqueName: \"kubernetes.io/projected/2ec18a16-766f-4a0c-a393-0ca7a999011e-kube-api-access-4xt8q\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838467 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6lvx\" (UniqueName: \"kubernetes.io/projected/8d4c91a6-8441-45a6-bb6a-7655ba464fb9-kube-api-access-s6lvx\") pod \"neutron-operator-controller-manager-64ddbf8bb-kg6xx\" (UID: \"8d4c91a6-8441-45a6-bb6a-7655ba464fb9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838545 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838566 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz9k7\" (UniqueName: \"kubernetes.io/projected/a2547c9d-80d6-491d-8517-26327e35a1f4-kube-api-access-jz9k7\") pod \"octavia-operator-controller-manager-69f8888797-xp9sf\" (UID: \"a2547c9d-80d6-491d-8517-26327e35a1f4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.839740 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.839785 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:20.339772393 +0000 UTC m=+1043.856131466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.874671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.875752 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.884145 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fqrkp" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.884915 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz9k7\" (UniqueName: \"kubernetes.io/projected/a2547c9d-80d6-491d-8517-26327e35a1f4-kube-api-access-jz9k7\") pod \"octavia-operator-controller-manager-69f8888797-xp9sf\" (UID: \"a2547c9d-80d6-491d-8517-26327e35a1f4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.885562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xt8q\" (UniqueName: \"kubernetes.io/projected/2ec18a16-766f-4a0c-a393-0ca7a999011e-kube-api-access-4xt8q\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.895639 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.906881 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6lvx\" (UniqueName: \"kubernetes.io/projected/8d4c91a6-8441-45a6-bb6a-7655ba464fb9-kube-api-access-s6lvx\") pod \"neutron-operator-controller-manager-64ddbf8bb-kg6xx\" (UID: \"8d4c91a6-8441-45a6-bb6a-7655ba464fb9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.929639 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.943163 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468w4\" (UniqueName: \"kubernetes.io/projected/0a170b4f-607d-4c7c-bd0c-ee6c29523b44-kube-api-access-468w4\") pod \"placement-operator-controller-manager-8497b45c89-5mm2j\" (UID: \"0a170b4f-607d-4c7c-bd0c-ee6c29523b44\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.943280 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmbl\" (UniqueName: \"kubernetes.io/projected/74dda28c-8860-440c-b97c-b16bab985ff0-kube-api-access-ggmbl\") pod \"swift-operator-controller-manager-68f46476f-z4vp8\" (UID: \"74dda28c-8860-440c-b97c-b16bab985ff0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.943304 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/6764d3f3-5e9f-4635-973e-81324dbc8e34-kube-api-access-pbdg6\") pod \"ovn-operator-controller-manager-d44cf6b75-slw7s\" (UID: \"6764d3f3-5e9f-4635-973e-81324dbc8e34\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.949221 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.960495 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.963213 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zxqhb"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.964217 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.968675 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tnx2g" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.996879 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.998085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.021813 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m7nf7" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.040801 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zxqhb"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.041127 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hx9l\" (UniqueName: \"kubernetes.io/projected/b42c0b9b-cca5-4ecb-908e-508fbf932dfe-kube-api-access-4hx9l\") pod \"test-operator-controller-manager-7866795846-zxqhb\" (UID: \"b42c0b9b-cca5-4ecb-908e-508fbf932dfe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggmbl\" (UniqueName: \"kubernetes.io/projected/74dda28c-8860-440c-b97c-b16bab985ff0-kube-api-access-ggmbl\") pod \"swift-operator-controller-manager-68f46476f-z4vp8\" (UID: \"74dda28c-8860-440c-b97c-b16bab985ff0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/6764d3f3-5e9f-4635-973e-81324dbc8e34-kube-api-access-pbdg6\") pod \"ovn-operator-controller-manager-d44cf6b75-slw7s\" (UID: \"6764d3f3-5e9f-4635-973e-81324dbc8e34\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468w4\" (UniqueName: \"kubernetes.io/projected/0a170b4f-607d-4c7c-bd0c-ee6c29523b44-kube-api-access-468w4\") pod \"placement-operator-controller-manager-8497b45c89-5mm2j\" (UID: \"0a170b4f-607d-4c7c-bd0c-ee6c29523b44\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044402 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mh5\" (UniqueName: \"kubernetes.io/projected/bdd19f1d-df45-4dda-a2bd-b14da398e043-kube-api-access-b9mh5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-dnzp5\" (UID: \"bdd19f1d-df45-4dda-a2bd-b14da398e043\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.048212 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.050605 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.054146 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qpc64" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.066913 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.071042 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.080300 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468w4\" (UniqueName: \"kubernetes.io/projected/0a170b4f-607d-4c7c-bd0c-ee6c29523b44-kube-api-access-468w4\") pod \"placement-operator-controller-manager-8497b45c89-5mm2j\" (UID: \"0a170b4f-607d-4c7c-bd0c-ee6c29523b44\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.083509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggmbl\" (UniqueName: \"kubernetes.io/projected/74dda28c-8860-440c-b97c-b16bab985ff0-kube-api-access-ggmbl\") pod \"swift-operator-controller-manager-68f46476f-z4vp8\" (UID: \"74dda28c-8860-440c-b97c-b16bab985ff0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.084747 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/6764d3f3-5e9f-4635-973e-81324dbc8e34-kube-api-access-pbdg6\") pod \"ovn-operator-controller-manager-d44cf6b75-slw7s\" (UID: \"6764d3f3-5e9f-4635-973e-81324dbc8e34\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.089727 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.104459 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.121114 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.126948 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.128528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.135193 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.135324 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hvdj8" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.135520 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.140158 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.150965 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw4t4\" (UniqueName: \"kubernetes.io/projected/cde66c49-b3c4-4f4f-b614-c4343d1c3732-kube-api-access-sw4t4\") pod \"watcher-operator-controller-manager-5db88f68c-5qkk2\" (UID: \"cde66c49-b3c4-4f4f-b614-c4343d1c3732\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.151021 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hx9l\" (UniqueName: \"kubernetes.io/projected/b42c0b9b-cca5-4ecb-908e-508fbf932dfe-kube-api-access-4hx9l\") pod \"test-operator-controller-manager-7866795846-zxqhb\" (UID: \"b42c0b9b-cca5-4ecb-908e-508fbf932dfe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.151098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.151151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9mh5\" (UniqueName: \"kubernetes.io/projected/bdd19f1d-df45-4dda-a2bd-b14da398e043-kube-api-access-b9mh5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-dnzp5\" (UID: \"bdd19f1d-df45-4dda-a2bd-b14da398e043\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.151376 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.151426 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.151411392 +0000 UTC m=+1044.667770465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.161387 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.183389 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hx9l\" (UniqueName: \"kubernetes.io/projected/b42c0b9b-cca5-4ecb-908e-508fbf932dfe-kube-api-access-4hx9l\") pod \"test-operator-controller-manager-7866795846-zxqhb\" (UID: \"b42c0b9b-cca5-4ecb-908e-508fbf932dfe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.208111 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.208283 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9mh5\" (UniqueName: \"kubernetes.io/projected/bdd19f1d-df45-4dda-a2bd-b14da398e043-kube-api-access-b9mh5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-dnzp5\" (UID: \"bdd19f1d-df45-4dda-a2bd-b14da398e043\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.226226 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.227338 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.233083 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-h9q89" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.233558 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.233779 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.240535 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.252989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.253065 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw4t4\" (UniqueName: \"kubernetes.io/projected/cde66c49-b3c4-4f4f-b614-c4343d1c3732-kube-api-access-sw4t4\") pod \"watcher-operator-controller-manager-5db88f68c-5qkk2\" (UID: \"cde66c49-b3c4-4f4f-b614-c4343d1c3732\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.253138 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.253199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qkl\" (UniqueName: \"kubernetes.io/projected/5e47b192-26de-4639-afe8-ec7b5fcc10c8-kube-api-access-s4qkl\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" 
(UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.293245 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw4t4\" (UniqueName: \"kubernetes.io/projected/cde66c49-b3c4-4f4f-b614-c4343d1c3732-kube-api-access-sw4t4\") pod \"watcher-operator-controller-manager-5db88f68c-5qkk2\" (UID: \"cde66c49-b3c4-4f4f-b614-c4343d1c3732\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.336898 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354018 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qkl\" (UniqueName: \"kubernetes.io/projected/5e47b192-26de-4639-afe8-ec7b5fcc10c8-kube-api-access-s4qkl\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354119 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54s4\" (UniqueName: \"kubernetes.io/projected/a83d92da-4f15-4e33-ab57-ae7bc9e0da5e-kube-api-access-v54s4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xcs6n\" (UID: \"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354155 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354186 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354336 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354380 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. 
No retries permitted until 2026-02-17 16:11:20.854366936 +0000 UTC m=+1044.370726009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354771 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354803 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.354795498 +0000 UTC m=+1044.871154571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354833 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354948 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:20.854919421 +0000 UTC m=+1044.371278494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.375102 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qkl\" (UniqueName: \"kubernetes.io/projected/5e47b192-26de-4639-afe8-ec7b5fcc10c8-kube-api-access-s4qkl\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.376893 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.413810 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.455931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54s4\" (UniqueName: \"kubernetes.io/projected/a83d92da-4f15-4e33-ab57-ae7bc9e0da5e-kube-api-access-v54s4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xcs6n\" (UID: \"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.504304 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54s4\" (UniqueName: \"kubernetes.io/projected/a83d92da-4f15-4e33-ab57-ae7bc9e0da5e-kube-api-access-v54s4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xcs6n\" (UID: \"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.517715 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.690823 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.869698 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.869772 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.869926 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.869984 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.869967056 +0000 UTC m=+1045.386326129 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.870020 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.870153 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.870124251 +0000 UTC m=+1045.386483324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.138083 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96baec58_63b9_49cd_9cf4_32639e58d4ac.slice/crio-a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac WatchSource:0}: Error finding container a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac: Status 404 returned error can't find the container with id a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.142964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.161701 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.175230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.175438 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.175509 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.175488328 +0000 UTC m=+1046.691847401 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.191238 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"] Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.198967 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e657888_7f8f_4d5d_8ef3_7f7472a7e4fb.slice/crio-10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f WatchSource:0}: Error finding container 10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f: Status 404 returned error can't find the container with id 10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.201046 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.377664 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.378081 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.378247 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.378225358 +0000 UTC m=+1046.894584441 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.539042 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.583638 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.589900 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.605379 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.618056 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zxqhb"] Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.619861 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod681f334b_d0ac_43dc_babb_92d9cb7c0440.slice/crio-f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984 WatchSource:0}: Error finding container f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984: Status 404 returned error can't find the container with id f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984 Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.653493 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.672876 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.674606 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.681854 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"] Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.718019 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda83d92da_4f15_4e33_ab57_ae7bc9e0da5e.slice/crio-49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d WatchSource:0}: Error finding container 49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d: Status 404 returned error can't find the container with id 49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.723686 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.730649 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.740651 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.741460 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.760229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" event={"ID":"b622bb16-c5b4-45ea-b493-e681d36d49ac","Type":"ContainerStarted","Data":"a013bdb67caf173970839cbc44f7c0d28e286c6c821cb41a233b48e8ffa75d00"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.760756 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.821812 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.836316 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" event={"ID":"96baec58-63b9-49cd-9cf4-32639e58d4ac","Type":"ContainerStarted","Data":"a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac"} Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837619 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8dhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-t9k25_openstack-operators(a6f8ca14-e1db-4dcc-a64d-7bf137105e80): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837686 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggmbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-z4vp8_openstack-operators(74dda28c-8860-440c-b97c-b16bab985ff0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc 
Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837699 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zfgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-thpj7_openstack-operators(ace1fd54-7ff8-45b9-a77b-c3908044365e): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837719 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sw4t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-5qkk2_openstack-operators(cde66c49-b3c4-4f4f-b614-c4343d1c3732): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.838815 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podUID="74dda28c-8860-440c-b97c-b16bab985ff0"
Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.838854 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podUID="cde66c49-b3c4-4f4f-b614-c4343d1c3732"
Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.838948 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podUID="a6f8ca14-e1db-4dcc-a64d-7bf137105e80"
Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.838974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podUID="ace1fd54-7ff8-45b9-a77b-c3908044365e"
Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.840155 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" event={"ID":"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb","Type":"ContainerStarted","Data":"10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f"}
Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.841079 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9mh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-66fcc5ff49-dnzp5_openstack-operators(bdd19f1d-df45-4dda-a2bd-b14da398e043): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
&Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9mh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-66fcc5ff49-dnzp5_openstack-operators(bdd19f1d-df45-4dda-a2bd-b14da398e043): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.845143 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podUID="bdd19f1d-df45-4dda-a2bd-b14da398e043" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.846133 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbdg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-slw7s_openstack-operators(6764d3f3-5e9f-4635-973e-81324dbc8e34): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.848389 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podUID="6764d3f3-5e9f-4635-973e-81324dbc8e34" Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.866154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" event={"ID":"0a170b4f-607d-4c7c-bd0c-ee6c29523b44","Type":"ContainerStarted","Data":"a86b74ef2e726a2453a5336f67089f6236bae472a2e5292332bbe88aea3586c9"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.873394 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" event={"ID":"93278ccd-52fe-4848-9a46-3f47369d47ab","Type":"ContainerStarted","Data":"6745e62320b4c7aca0252eb3a66554bad32b95d55747ae9157a937d763c44158"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.874976 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" event={"ID":"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8","Type":"ContainerStarted","Data":"c1709e0c16cbcf0c62db44b4f39a7fbb858a964a5421c944bdf39338198333fc"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.876796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" event={"ID":"e2e1b5f4-7ed2-4ab1-871b-1974a7559252","Type":"ContainerStarted","Data":"f01f1a3187f6e8d350cf0043f487d17d6c2abdd4325b2fbbefcb320657dfa386"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.878385 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" event={"ID":"681f334b-d0ac-43dc-babb-92d9cb7c0440","Type":"ContainerStarted","Data":"f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.901280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.901425 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901590 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901647 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.901628049 +0000 UTC m=+1047.417987122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901694 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901713 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.901707221 +0000 UTC m=+1047.418066294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.899975 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" event={"ID":"6764d3f3-5e9f-4635-973e-81324dbc8e34","Type":"ContainerStarted","Data":"d5c9ba4fcfc85878cebb360bf7a5018e1fa34a5013319692f4f5b1bb9272ca70"} Feb 17 16:11:22 crc kubenswrapper[4808]: E0217 16:11:22.906592 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podUID="6764d3f3-5e9f-4635-973e-81324dbc8e34" Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.914821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" event={"ID":"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e","Type":"ContainerStarted","Data":"49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d"} Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.916908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" event={"ID":"a40e52a1-9867-413a-81fb-324789e0a009","Type":"ContainerStarted","Data":"e52f1d2721acbcf9be3698fd178070c78dd8d3ce31c40d0b96b77527b5829735"} Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.932346 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" event={"ID":"cde66c49-b3c4-4f4f-b614-c4343d1c3732","Type":"ContainerStarted","Data":"c5de695f6323ef5910f014183e4a1a0b742925e652f4caab358ecdbdf09a8535"} Feb 17 16:11:22 crc kubenswrapper[4808]: E0217 16:11:22.939864 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podUID="cde66c49-b3c4-4f4f-b614-c4343d1c3732" Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.949936 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" event={"ID":"b42c0b9b-cca5-4ecb-908e-508fbf932dfe","Type":"ContainerStarted","Data":"55d9fddddb15a3287572abfeb236e5e5ca9dd50652f15828c8a5dd795a98661b"} Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.972794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" event={"ID":"ace1fd54-7ff8-45b9-a77b-c3908044365e","Type":"ContainerStarted","Data":"0b30b028fcdfafb2488a0117f705591643a17be31a80764ae26c1e10fc159068"} Feb 17 16:11:22 crc kubenswrapper[4808]: E0217 16:11:22.986212 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podUID="ace1fd54-7ff8-45b9-a77b-c3908044365e" Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.991728 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" event={"ID":"8d4c91a6-8441-45a6-bb6a-7655ba464fb9","Type":"ContainerStarted","Data":"20d7dc1b6b560cc6277ecd84ae978298226c2eb38537305baaac7ead51dadbb4"} Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.004188 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" event={"ID":"a2547c9d-80d6-491d-8517-26327e35a1f4","Type":"ContainerStarted","Data":"729d7f53e063d4eb3ea3fa2107b7194c4029562dc324a5db783dcdf7ed46c68b"} Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.009694 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" event={"ID":"a6f8ca14-e1db-4dcc-a64d-7bf137105e80","Type":"ContainerStarted","Data":"37c6bc603878988cbbea0472e736bda5560b5e05d1eb11742b0a852e536e7944"} Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.015959 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podUID="a6f8ca14-e1db-4dcc-a64d-7bf137105e80" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.023085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" event={"ID":"74dda28c-8860-440c-b97c-b16bab985ff0","Type":"ContainerStarted","Data":"41aa92b167d652175fc28fd82809bf6736d7d5a2812b6fc83b7a9a95cdae24f8"} Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.026395 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podUID="74dda28c-8860-440c-b97c-b16bab985ff0" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.047396 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" event={"ID":"bdd19f1d-df45-4dda-a2bd-b14da398e043","Type":"ContainerStarted","Data":"0d7ab3bcc5a037e32d12a0b4f5588885275a790adb5c8bfec1ea47a493fabeb1"} Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.067760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" event={"ID":"77df5d1f-daff-4508-861a-335ab87f2366","Type":"ContainerStarted","Data":"b0063d6f14f391db832e148bba81c497614912df0fadf51543ec2f3dd9863c9a"} Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.067904 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podUID="bdd19f1d-df45-4dda-a2bd-b14da398e043" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.233364 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.233923 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.233969 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.233956783 +0000 UTC m=+1050.750315856 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.436321 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.436691 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.436773 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.436727482 +0000 UTC m=+1050.953086555 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.943368 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.943459 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.943642 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.943692 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.943677908 +0000 UTC m=+1051.460036981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.944222 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.944252 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.944241794 +0000 UTC m=+1051.460600867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found
Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.081756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podUID="ace1fd54-7ff8-45b9-a77b-c3908044365e"
Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082093 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podUID="74dda28c-8860-440c-b97c-b16bab985ff0"
Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082152 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podUID="a6f8ca14-e1db-4dcc-a64d-7bf137105e80"
Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082190 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podUID="bdd19f1d-df45-4dda-a2bd-b14da398e043"
Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082263 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podUID="cde66c49-b3c4-4f4f-b614-c4343d1c3732"
Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podUID="6764d3f3-5e9f-4635-973e-81324dbc8e34"
Feb 17 16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.240730 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
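Note: the burst of "Error syncing pod, skipping ... ImagePullBackOff" entries at 16:11:24 is the kubelet's periodic pod sync declining to retry a pull while the image back-off window is still open; the pulls themselves failed earlier, and a few fail again below at 16:11:35-36 with "context canceled". A minimal way to surface the same waiting reason through the API, assuming the official `kubernetes` Python client and a kubeconfig that can reach this cluster; the pod and namespace names are taken from the log above:

```python
# Illustrative sketch: read the container waiting state that produces the
# ImagePullBackOff entries above. Assumes `pip install kubernetes` and a
# working kubeconfig for this cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod(
    "ovn-operator-controller-manager-d44cf6b75-slw7s",  # name from the log
    "openstack-operators",
)
for cs in pod.status.container_statuses or []:
    if cs.state.waiting:
        # e.g. "manager ImagePullBackOff Back-off pulling image ..."
        print(cs.name, cs.state.waiting.reason, cs.state.waiting.message)
```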
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.240954 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.241401 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:35.241379766 +0000 UTC m=+1058.757738839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.444004 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.444185 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.444525 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:35.444505955 +0000 UTC m=+1058.960865028 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.951443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"
Feb 17 16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.951524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.952294 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
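Note: these secret mounts keep failing simply because the referenced Secrets do not exist yet (they are typically published by cert-manager or the operator bundle once its webhooks come up, which is consistent with the mounts succeeding later in this log), and the volume manager retries on a doubling schedule: durationBeforeRetry is 2s at 16:11:21, 4s at 16:11:23, 8s here, and 16s below at 16:11:35. A sketch of that retry curve; the initial 2s delay is read off the log, while the cap is an assumption for illustration, not the kubelet's exact constant:

```python
# Doubling back-off as observed in durationBeforeRetry (2s -> 4s -> 8s -> 16s).
# `cap` is hypothetical; the kubelet eventually stops growing the delay.
def backoff(initial=2.0, factor=2.0, cap=512.0):
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

for attempt, delay in zip(range(1, 5), backoff()):
    print(f"attempt {attempt}: retry in {delay:.0f}s")  # 2, 4, 8, 16
```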
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.952344 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:35.952329384 +0000 UTC m=+1059.468688457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.954062 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.954109 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:35.954097562 +0000 UTC m=+1059.470456635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found
Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.269199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.277630 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.385839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.474019 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.477586 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.481273 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.747209 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.747404 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6lvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-kg6xx_openstack-operators(8d4c91a6-8441-45a6-bb6a-7655ba464fb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.748738 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" podUID="8d4c91a6-8441-45a6-bb6a-7655ba464fb9" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.980636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.981165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.981338 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.981400 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:51.981384324 +0000 UTC m=+1075.497743397 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.984242 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.233027 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" podUID="8d4c91a6-8441-45a6-bb6a-7655ba464fb9" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.314144 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.315291 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v54s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xcs6n_openstack-operators(a83d92da-4f15-4e33-ab57-ae7bc9e0da5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.317072 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" podUID="a83d92da-4f15-4e33-ab57-ae7bc9e0da5e" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.975966 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.976188 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jgrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-8xfc6_openstack-operators(96baec58-63b9-49cd-9cf4-32639e58d4ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.977407 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" podUID="96baec58-63b9-49cd-9cf4-32639e58d4ac" Feb 17 16:11:37 crc kubenswrapper[4808]: E0217 16:11:37.240746 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" podUID="96baec58-63b9-49cd-9cf4-32639e58d4ac" Feb 17 16:11:37 crc kubenswrapper[4808]: E0217 16:11:37.242516 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" 
podUID="a83d92da-4f15-4e33-ab57-ae7bc9e0da5e" Feb 17 16:11:37 crc kubenswrapper[4808]: I0217 16:11:37.960348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"] Feb 17 16:11:37 crc kubenswrapper[4808]: I0217 16:11:37.986809 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"] Feb 17 16:11:38 crc kubenswrapper[4808]: W0217 16:11:38.319909 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6508a74d_2dba_4d1b_910c_95c9463c15a4.slice/crio-237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55 WatchSource:0}: Error finding container 237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55: Status 404 returned error can't find the container with id 237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55 Feb 17 16:11:38 crc kubenswrapper[4808]: W0217 16:11:38.321722 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec18a16_766f_4a0c_a393_0ca7a999011e.slice/crio-cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79 WatchSource:0}: Error finding container cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79: Status 404 returned error can't find the container with id cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79 Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.266143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" event={"ID":"e2e1b5f4-7ed2-4ab1-871b-1974a7559252","Type":"ContainerStarted","Data":"10ba36d4f9cf03b45783fab1951237e478555c6bef77aa74f843c9d4918aa3c5"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.266519 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.270920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" event={"ID":"0a170b4f-607d-4c7c-bd0c-ee6c29523b44","Type":"ContainerStarted","Data":"63e8a86166c0d60b5adc435b1753a337c61a19af3e97088c7b2e8f9cfbb53239"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.271188 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.279802 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" event={"ID":"74dda28c-8860-440c-b97c-b16bab985ff0","Type":"ContainerStarted","Data":"6497a201ad8e9130bd3a0568def9743e4a96faf5dbd76f138408a2c0aec4a7e0"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.280433 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.287737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" 
event={"ID":"93278ccd-52fe-4848-9a46-3f47369d47ab","Type":"ContainerStarted","Data":"d7f91e5327480e544341356ffa79a8ee03c2d2edf8fdaa07bb58a258ca2dcc5c"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.287909 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.299894 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" podStartSLOduration=4.115590268 podStartE2EDuration="20.299875464s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:20.776558888 +0000 UTC m=+1044.292917961" lastFinishedPulling="2026-02-17 16:11:36.960844084 +0000 UTC m=+1060.477203157" observedRunningTime="2026-02-17 16:11:39.299774881 +0000 UTC m=+1062.816133954" watchObservedRunningTime="2026-02-17 16:11:39.299875464 +0000 UTC m=+1062.816234537" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.303919 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" event={"ID":"b42c0b9b-cca5-4ecb-908e-508fbf932dfe","Type":"ContainerStarted","Data":"7bf2580f0e19d355458c489b39109492283ef204a75a011033182321aedaec9b"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.304654 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.345940 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" podStartSLOduration=5.041002382 podStartE2EDuration="20.34592292s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.640431307 +0000 UTC m=+1045.156790380" lastFinishedPulling="2026-02-17 16:11:36.945351845 +0000 UTC m=+1060.461710918" observedRunningTime="2026-02-17 16:11:39.329543388 +0000 UTC m=+1062.845902461" watchObservedRunningTime="2026-02-17 16:11:39.34592292 +0000 UTC m=+1062.862281993" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.368906 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" event={"ID":"77df5d1f-daff-4508-861a-335ab87f2366","Type":"ContainerStarted","Data":"c7c580f02fa62c4b557abb26c0105494d4fd28d5a667d1edb351e3e50d268919"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.369558 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.370301 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" podStartSLOduration=5.117258348 podStartE2EDuration="20.3702877s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.69704266 +0000 UTC m=+1045.213401733" lastFinishedPulling="2026-02-17 16:11:36.950071992 +0000 UTC m=+1060.466431085" observedRunningTime="2026-02-17 16:11:39.366982571 +0000 UTC m=+1062.883341654" watchObservedRunningTime="2026-02-17 16:11:39.3702877 +0000 UTC m=+1062.886646773" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.390628 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" event={"ID":"a2547c9d-80d6-491d-8517-26327e35a1f4","Type":"ContainerStarted","Data":"59f1bc51c76506d0352a289169377333de9bc21398f8f86076b19fd57d8cf149"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.391223 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.401973 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" event={"ID":"b622bb16-c5b4-45ea-b493-e681d36d49ac","Type":"ContainerStarted","Data":"d102a1692b29c78ec5949caf797a6f631fc63ce4f4fdca2a995d1ab4319dce2b"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.402606 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.410629 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" event={"ID":"a40e52a1-9867-413a-81fb-324789e0a009","Type":"ContainerStarted","Data":"b2e8f40bc85c48f93a9ebc1a04f882ac64bc96ec2e858900d68c3eb95e8624f3"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.411547 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.412056 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podStartSLOduration=3.8760442729999998 podStartE2EDuration="20.412040691s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837555005 +0000 UTC m=+1045.353914078" lastFinishedPulling="2026-02-17 16:11:38.373551423 +0000 UTC m=+1061.889910496" observedRunningTime="2026-02-17 16:11:39.409134162 +0000 UTC m=+1062.925493235" watchObservedRunningTime="2026-02-17 16:11:39.412040691 +0000 UTC m=+1062.928399764" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.424972 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" event={"ID":"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8","Type":"ContainerStarted","Data":"9c3d9151cb320a5badba0841bfb936a18ca767b80699a4b018bac68278862dc8"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.425055 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.447220 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" event={"ID":"ace1fd54-7ff8-45b9-a77b-c3908044365e","Type":"ContainerStarted","Data":"f58304db8542cdf4ba5a2ead3868c83fac7d59192ab35082ded69eab18dd4582"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.448065 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.457089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" event={"ID":"681f334b-d0ac-43dc-babb-92d9cb7c0440","Type":"ContainerStarted","Data":"02f2062e15e1d75b80c1caf8051d0d941859d1acb5970a57b47ff4e2471daf18"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.458167 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.459737 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" podStartSLOduration=5.257152746 podStartE2EDuration="20.459714972s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.743234961 +0000 UTC m=+1045.259594034" lastFinishedPulling="2026-02-17 16:11:36.945797187 +0000 UTC m=+1060.462156260" observedRunningTime="2026-02-17 16:11:39.443968635 +0000 UTC m=+1062.960327708" watchObservedRunningTime="2026-02-17 16:11:39.459714972 +0000 UTC m=+1062.976074045" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.464896 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" event={"ID":"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb","Type":"ContainerStarted","Data":"471671a3ff538430bb2cd71466b023fff9c7f5639bd40d52c4753c7643e06ccc"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.465293 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.466958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" event={"ID":"6508a74d-2dba-4d1b-910c-95c9463c15a4","Type":"ContainerStarted","Data":"237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.481839 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" podStartSLOduration=4.702851988 podStartE2EDuration="20.48181618s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.164680176 +0000 UTC m=+1044.681039249" lastFinishedPulling="2026-02-17 16:11:36.943644368 +0000 UTC m=+1060.460003441" observedRunningTime="2026-02-17 16:11:39.480454953 +0000 UTC m=+1062.996814056" watchObservedRunningTime="2026-02-17 16:11:39.48181618 +0000 UTC m=+1062.998175253" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.485842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" event={"ID":"2ec18a16-766f-4a0c-a393-0ca7a999011e","Type":"ContainerStarted","Data":"cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.537783 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" podStartSLOduration=5.323530643 podStartE2EDuration="20.537767835s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.739257542 +0000 UTC m=+1045.255616615" lastFinishedPulling="2026-02-17 16:11:36.953494744 +0000 UTC m=+1060.469853807" observedRunningTime="2026-02-17 
16:11:39.535028701 +0000 UTC m=+1063.051387784" watchObservedRunningTime="2026-02-17 16:11:39.537767835 +0000 UTC m=+1063.054126908" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.554566 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" podStartSLOduration=5.358007337 podStartE2EDuration="20.55454928s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.755644906 +0000 UTC m=+1045.272003979" lastFinishedPulling="2026-02-17 16:11:36.952186829 +0000 UTC m=+1060.468545922" observedRunningTime="2026-02-17 16:11:39.550666635 +0000 UTC m=+1063.067025708" watchObservedRunningTime="2026-02-17 16:11:39.55454928 +0000 UTC m=+1063.070908343" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.591670 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" podStartSLOduration=5.4840432 podStartE2EDuration="20.591652044s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837207015 +0000 UTC m=+1045.353566088" lastFinishedPulling="2026-02-17 16:11:36.944815859 +0000 UTC m=+1060.461174932" observedRunningTime="2026-02-17 16:11:39.58891817 +0000 UTC m=+1063.105277243" watchObservedRunningTime="2026-02-17 16:11:39.591652044 +0000 UTC m=+1063.108011117" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.613148 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" podStartSLOduration=5.310232913 podStartE2EDuration="20.613132136s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.640708144 +0000 UTC m=+1045.157067217" lastFinishedPulling="2026-02-17 16:11:36.943607357 +0000 UTC m=+1060.459966440" observedRunningTime="2026-02-17 16:11:39.610919916 +0000 UTC m=+1063.127278989" watchObservedRunningTime="2026-02-17 16:11:39.613132136 +0000 UTC m=+1063.129491209" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.647022 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podStartSLOduration=4.061484973 podStartE2EDuration="20.647004783s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837531374 +0000 UTC m=+1045.353890447" lastFinishedPulling="2026-02-17 16:11:38.423051184 +0000 UTC m=+1061.939410257" observedRunningTime="2026-02-17 16:11:39.638446101 +0000 UTC m=+1063.154805174" watchObservedRunningTime="2026-02-17 16:11:39.647004783 +0000 UTC m=+1063.163363856" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.673477 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" podStartSLOduration=4.934671433 podStartE2EDuration="20.673453028s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.206998481 +0000 UTC m=+1044.723357554" lastFinishedPulling="2026-02-17 16:11:36.945780076 +0000 UTC m=+1060.462139149" observedRunningTime="2026-02-17 16:11:39.66390647 +0000 UTC m=+1063.180265563" watchObservedRunningTime="2026-02-17 16:11:39.673453028 +0000 UTC m=+1063.189812101" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.701260 4808 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" podStartSLOduration=4.943286648 podStartE2EDuration="20.701239551s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.193541368 +0000 UTC m=+1044.709900441" lastFinishedPulling="2026-02-17 16:11:36.951494251 +0000 UTC m=+1060.467853344" observedRunningTime="2026-02-17 16:11:39.694855468 +0000 UTC m=+1063.211214571" watchObservedRunningTime="2026-02-17 16:11:39.701239551 +0000 UTC m=+1063.217598634" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.544285 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" event={"ID":"a6f8ca14-e1db-4dcc-a64d-7bf137105e80","Type":"ContainerStarted","Data":"88a8fdb8db4991b23917c8312b4175332240d40a9f79fc4130257e29403cf5d7"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.544892 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.545967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" event={"ID":"6508a74d-2dba-4d1b-910c-95c9463c15a4","Type":"ContainerStarted","Data":"558762a59baaab1168f86ef43b4d76016a7250671391a5b40b5d0979d10b358a"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.546083 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.547684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" event={"ID":"2ec18a16-766f-4a0c-a393-0ca7a999011e","Type":"ContainerStarted","Data":"c91652db3c8c897772638b6654280bc4621e873e9701eda8ff3cf54fd4856b76"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.547817 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.550107 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" event={"ID":"cde66c49-b3c4-4f4f-b614-c4343d1c3732","Type":"ContainerStarted","Data":"b4044d49f55d6d44041e442f6cbe164ea3fd523bc3d8574d53f27573385913c7"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.550361 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.552864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" event={"ID":"bdd19f1d-df45-4dda-a2bd-b14da398e043","Type":"ContainerStarted","Data":"33d6a07fb5251112637b4c21e182ca6b6a5429ea65ee868cdcd15af9eebf7d94"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.553065 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.554873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" 
event={"ID":"6764d3f3-5e9f-4635-973e-81324dbc8e34","Type":"ContainerStarted","Data":"403eef907dbbbbbb81eaacc2ef278118280b09ee4a8dec83f69983fcad525b75"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.555025 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.569745 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podStartSLOduration=3.591555949 podStartE2EDuration="27.569727358s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837485563 +0000 UTC m=+1045.353844636" lastFinishedPulling="2026-02-17 16:11:45.815656962 +0000 UTC m=+1069.332016045" observedRunningTime="2026-02-17 16:11:46.56349604 +0000 UTC m=+1070.079855123" watchObservedRunningTime="2026-02-17 16:11:46.569727358 +0000 UTC m=+1070.086086431" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.581984 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podStartSLOduration=3.5838316900000002 podStartE2EDuration="27.581966879s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837661398 +0000 UTC m=+1045.354020471" lastFinishedPulling="2026-02-17 16:11:45.835796577 +0000 UTC m=+1069.352155660" observedRunningTime="2026-02-17 16:11:46.581849096 +0000 UTC m=+1070.098208179" watchObservedRunningTime="2026-02-17 16:11:46.581966879 +0000 UTC m=+1070.098325952" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.599754 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" podStartSLOduration=20.152870903 podStartE2EDuration="27.599735141s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:38.367603812 +0000 UTC m=+1061.883962885" lastFinishedPulling="2026-02-17 16:11:45.81446805 +0000 UTC m=+1069.330827123" observedRunningTime="2026-02-17 16:11:46.597735717 +0000 UTC m=+1070.114094790" watchObservedRunningTime="2026-02-17 16:11:46.599735141 +0000 UTC m=+1070.116094214" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.633325 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podStartSLOduration=3.679400278 podStartE2EDuration="27.63330954s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.846024734 +0000 UTC m=+1045.362383807" lastFinishedPulling="2026-02-17 16:11:45.799933996 +0000 UTC m=+1069.316293069" observedRunningTime="2026-02-17 16:11:46.629536997 +0000 UTC m=+1070.145896070" watchObservedRunningTime="2026-02-17 16:11:46.63330954 +0000 UTC m=+1070.149668613" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.643231 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podStartSLOduration=3.672908142 podStartE2EDuration="27.643213148s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.840909565 +0000 UTC m=+1045.357268628" lastFinishedPulling="2026-02-17 16:11:45.811214561 +0000 UTC m=+1069.327573634" observedRunningTime="2026-02-17 16:11:46.641967674 +0000 UTC 
m=+1070.158326747" watchObservedRunningTime="2026-02-17 16:11:46.643213148 +0000 UTC m=+1070.159572221" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.667691 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" podStartSLOduration=20.201706676 podStartE2EDuration="27.66767547s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:38.344917038 +0000 UTC m=+1061.861276111" lastFinishedPulling="2026-02-17 16:11:45.810885812 +0000 UTC m=+1069.327244905" observedRunningTime="2026-02-17 16:11:46.663497488 +0000 UTC m=+1070.179856581" watchObservedRunningTime="2026-02-17 16:11:46.66767547 +0000 UTC m=+1070.184034543" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.393568 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.394953 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.477301 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.514386 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.582383 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" event={"ID":"96baec58-63b9-49cd-9cf4-32639e58d4ac","Type":"ContainerStarted","Data":"9e478c9f9a0d25bbeae5b246ed737a0687f185285699245fcb63975c99556b60"} Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.582991 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.600031 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" podStartSLOduration=3.164275062 podStartE2EDuration="30.600001315s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.14042901 +0000 UTC m=+1044.656788083" lastFinishedPulling="2026-02-17 16:11:48.576155263 +0000 UTC m=+1072.092514336" observedRunningTime="2026-02-17 16:11:49.599249155 +0000 UTC m=+1073.115608238" watchObservedRunningTime="2026-02-17 16:11:49.600001315 +0000 UTC m=+1073.116360398" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.709420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.735792 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.964266 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 
16:11:50.046974 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.108016 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.143884 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.237698 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.243522 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.338718 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:51 crc kubenswrapper[4808]: I0217 16:11:51.591831 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:11:51 crc kubenswrapper[4808]: I0217 16:11:51.593443 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.047987 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.058906 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.252105 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.538161 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"] Feb 17 16:11:52 crc kubenswrapper[4808]: W0217 16:11:52.545838 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e47b192_26de_4639_afe8_ec7b5fcc10c8.slice/crio-880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd WatchSource:0}: Error finding container 880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd: Status 404 returned error can't find the container with id 880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.605248 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" event={"ID":"5e47b192-26de-4639-afe8-ec7b5fcc10c8","Type":"ContainerStarted","Data":"880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd"} Feb 17 16:11:55 crc kubenswrapper[4808]: I0217 16:11:55.395801 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:55 crc kubenswrapper[4808]: I0217 16:11:55.490982 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.635117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" event={"ID":"5e47b192-26de-4639-afe8-ec7b5fcc10c8","Type":"ContainerStarted","Data":"6d4d77a435b1716349fcb18d5270ad1cbe553927d1e8453a2abbc8dc3f218c2b"} Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.635533 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.637310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" event={"ID":"8d4c91a6-8441-45a6-bb6a-7655ba464fb9","Type":"ContainerStarted","Data":"cd5188157b24f9c4992d2b83ab17e8dcb213752403da8c5826e5978a986199b5"} Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.637477 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.665506 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" podStartSLOduration=37.665487245 podStartE2EDuration="37.665487245s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:11:56.660542851 +0000 UTC m=+1080.176901924" watchObservedRunningTime="2026-02-17 16:11:56.665487245 +0000 UTC m=+1080.181846318" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.682401 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" podStartSLOduration=3.061028456 podStartE2EDuration="37.682385903s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.832232181 +0000 UTC m=+1045.348591254" lastFinishedPulling="2026-02-17 16:11:56.453589618 +0000 UTC m=+1079.969948701" observedRunningTime="2026-02-17 16:11:56.678741705 +0000 UTC m=+1080.195100778" watchObservedRunningTime="2026-02-17 16:11:56.682385903 +0000 UTC m=+1080.198744976" Feb 17 16:11:57 crc kubenswrapper[4808]: I0217 16:11:57.649553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" event={"ID":"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e","Type":"ContainerStarted","Data":"dcc6f8302433f854a84b7e778dd07fface235aeb9f74a175a0c5960110747d44"} Feb 17 16:11:57 crc kubenswrapper[4808]: I0217 16:11:57.673990 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" podStartSLOduration=3.958778523 podStartE2EDuration="38.673966991s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.739660834 +0000 UTC m=+1045.256019907" lastFinishedPulling="2026-02-17 16:11:56.454849302 +0000 UTC m=+1079.971208375" observedRunningTime="2026-02-17 16:11:57.667555877 +0000 UTC m=+1081.183914950" watchObservedRunningTime="2026-02-17 16:11:57.673966991 +0000 UTC m=+1081.190326064" Feb 17 16:11:59 crc kubenswrapper[4808]: I0217 16:11:59.690706 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" Feb 17 16:12:00 crc kubenswrapper[4808]: I0217 16:12:00.069478 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:12:00 crc kubenswrapper[4808]: I0217 16:12:00.211196 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:12:00 crc kubenswrapper[4808]: I0217 16:12:00.380883 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:12:00 crc kubenswrapper[4808]: I0217 16:12:00.416767 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:12:02 crc kubenswrapper[4808]: I0217 16:12:02.266174 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:12:10 crc kubenswrapper[4808]: I0217 16:12:10.124963 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:12:21 crc kubenswrapper[4808]: I0217 16:12:21.595631 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:21 crc kubenswrapper[4808]: I0217 16:12:21.596334 4808 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.474970 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.477931 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485141 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485300 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485616 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485764 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r4pxs" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.494190 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.532047 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.533235 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.538936 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.562148 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.577860 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.577902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.577935 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.578001 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88gz\" (UniqueName: 
\"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.578019 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678482 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678511 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678570 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678605 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.679427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.679439 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.681316 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: 
\"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.696129 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.698171 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.804449 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.870332 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.152694 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.287591 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.950437 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" event={"ID":"973eee94-2439-415c-b9b8-2f6f72738ac9","Type":"ContainerStarted","Data":"8041177f9f605013ae787b3681b3a5558dd54bee858e7ca6318f63453fa6a01c"} Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.952787 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" event={"ID":"38d70adc-e16e-4470-9b59-1c728c29318d","Type":"ContainerStarted","Data":"36e351405a8f30735cdfbd65ebbfe018758adcc5855f9db2bc133ed0f4654c84"} Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.262956 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.291097 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.292413 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.304518 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.331898 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.331971 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.332082 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.434123 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.434191 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.434237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.435230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.435365 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.458741 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvxp\" (UniqueName: 
\"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.557761 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.588789 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.590436 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.603509 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.613186 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.638095 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.638154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.638215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.739489 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.739545 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.739646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.740511 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.740611 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.758970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.920996 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.433673 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.437619 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.444629 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445203 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445496 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gc9dp" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445721 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445753 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445855 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.446241 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.449372 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450241 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450281 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450316 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450364 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450389 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450462 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450524 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450554 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450610 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450705 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"rabbitmq-server-0\" (UID: 
\"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552122 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552169 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552251 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552270 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552315 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552486 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552509 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552995 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.553342 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.553488 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.554812 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.555000 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.556611 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.557231 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.558617 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 
16:12:31.558782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.559348 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.559376 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f412b4a2036f29492410677330a9ca63ffe6d8a8c319c56d242ee67a4a97d25/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.570139 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.588831 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.711686 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.713632 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.716842 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.716895 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717155 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gsb4q" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717459 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717668 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717765 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717839 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.722890 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.779167 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856231 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856268 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856292 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856337 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856539 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856567 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856664 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856815 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958388 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958444 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958490 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958513 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958595 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958622 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958704 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.960174 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.961254 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.961360 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.961460 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.962154 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.967352 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.967818 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.973030 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.973066 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be40d6772f21ead376a83ce27352b0ce535ee01ddc50414a5dc6453b6d9bcfec/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.975699 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.980363 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.980974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.016730 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.037373 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.913393 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.915369 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.924459 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.924803 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-s6nf9" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.924989 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.926417 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.933217 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.938657 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075126 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075205 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-kolla-config\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075229 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-default\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxgv\" (UniqueName: \"kubernetes.io/projected/a020d38c-5e24-4266-96dc-9050e4d82f46-kube-api-access-mfxgv\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075443 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075532 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075605 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177513 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177589 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177615 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-kolla-config\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177759 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-default\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177786 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxgv\" (UniqueName: \"kubernetes.io/projected/a020d38c-5e24-4266-96dc-9050e4d82f46-kube-api-access-mfxgv\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177815 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.178526 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.179021 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-kolla-config\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.179130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.179896 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-default\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.182498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.185169 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.185249 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3e1c302f0afff268df14962949a9d196999f26ff33f0979bc5549004932fa8ad/globalmount\"" pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.186508 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.205109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxgv\" (UniqueName: \"kubernetes.io/projected/a020d38c-5e24-4266-96dc-9050e4d82f46-kube-api-access-mfxgv\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.218145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0" Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.240674 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.518920 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.520795 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.527754 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.527779 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.529164 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.536317 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.540914 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-n66xj" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709394 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709689 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjb7d\" (UniqueName: \"kubernetes.io/projected/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kube-api-access-pjb7d\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709722 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709738 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709864 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709900 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709937 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811301 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811357 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811520 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811557 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-pjb7d\" (UniqueName: \"kubernetes.io/projected/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kube-api-access-pjb7d\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.813284 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.813752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.814399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.815034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.817144 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.817294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.830586 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.831772 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.835468 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.835986 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.836164 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-n5t75" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.839745 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.839783 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5270466038ae00cf6871391119df2b111d8d15fa0af733fbdb4f1a590701fc8c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.840742 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjb7d\" (UniqueName: \"kubernetes.io/projected/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kube-api-access-pjb7d\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.850049 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.879010 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.896424 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017235 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017311 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-config-data\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017345 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlrz\" (UniqueName: \"kubernetes.io/projected/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kube-api-access-wqlrz\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017377 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kolla-config\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017405 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119384 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-config-data\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119418 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlrz\" (UniqueName: \"kubernetes.io/projected/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kube-api-access-wqlrz\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kolla-config\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119468 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.120299 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kolla-config\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.120396 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-config-data\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.122795 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.122994 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.137670 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlrz\" (UniqueName: \"kubernetes.io/projected/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kube-api-access-wqlrz\") pod \"memcached-0\" (UID: 
\"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.214812 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.891590 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.892910 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.895229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-f9csg" Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.919251 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.046329 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"kube-state-metrics-0\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") " pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.147259 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"kube-state-metrics-0\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") " pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.184186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"kube-state-metrics-0\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") " pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.210713 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.641926 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.645365 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659021 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-9fp42" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659671 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659706 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659824 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659920 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.674598 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.759779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.760101 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s2hk\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-kube-api-access-6s2hk\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.760216 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.760400 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.760555 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.761228 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-web-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.761351 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862882 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862934 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s2hk\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-kube-api-access-6s2hk\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862970 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862990 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.863007 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.863037 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.863058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.864263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.866213 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.866362 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.866743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.867496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.872290 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.883676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s2hk\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-kube-api-access-6s2hk\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.978200 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.228237 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.233016 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.235717 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236432 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236493 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2wbtf" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236711 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236895 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236900 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.239504 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.247694 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.247834 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.370894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.370954 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371059 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371259 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371305 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371477 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371509 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371543 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472857 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472909 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472960 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472981 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473011 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473028 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473083 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473125 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.474004 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.474153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.474160 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.477943 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.478603 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.479108 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.479175 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.479198 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f40780962e64d13d6799d8a1c9a177793dc18d1eb26c87512c3b4aff3215b0d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.480160 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.487212 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.500452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.536851 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.560362 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.419701 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pfcvm"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.421047 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.423457 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.423537 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6vzxz" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.423704 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.425606 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wkzp6"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.427077 4808 util.go:30] "No sandbox for pod can be found. 
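The records above show the kubelet volume reconciler walking every volume of prometheus-metric-storage-0 through the same three steps: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A minimal stdlib-Go sketch that pairs the started/succeeded records and flags volumes that never complete is below; it assumes the one-record-per-line journal format shown here, and the file name and regexes are illustrative only.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Pair "MountVolume started" with "MountVolume.SetUp succeeded" per
// pod/volume and report anything left pending. Sketch only: it relies on
// the escaped-quote record shape visible in this journal.
func main() {
	started := regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)
	succeeded := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)

	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[2]+"/"+m[1]] = true
		}
		if m := succeeded.FindStringSubmatch(line); m != nil {
			delete(pending, m[2]+"/"+m[1])
		}
	}
	for k := range pending {
		fmt.Println("never completed:", k)
	}
}
```

Fed with something like `journalctl -u kubelet --no-pager | go run pairmounts.go` (file name hypothetical), it prints nothing for the healthy sequence above, since every started volume reaches SetUp succeeded.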
Need to start a new one" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.439091 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.467852 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wkzp6"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.503893 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-ovn-controller-tls-certs\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.503967 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5sdf\" (UniqueName: \"kubernetes.io/projected/8a76a2ff-ed1a-4279-898c-54e85973f024-kube-api-access-h5sdf\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.503996 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.504065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a76a2ff-ed1a-4279-898c-54e85973f024-scripts\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.504089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.504139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-combined-ca-bundle\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.504158 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-log-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605384 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-ovn-controller-tls-certs\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: 
I0217 16:12:40.605451 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5sdf\" (UniqueName: \"kubernetes.io/projected/8a76a2ff-ed1a-4279-898c-54e85973f024-kube-api-access-h5sdf\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605477 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605504 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-log\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-run\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605588 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a76a2ff-ed1a-4279-898c-54e85973f024-scripts\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605605 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-etc-ovs\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605628 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605644 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-scripts\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdjtn\" (UniqueName: \"kubernetes.io/projected/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-kube-api-access-bdjtn\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605704 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-lib\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605725 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-combined-ca-bundle\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-log-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.606509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.606647 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.606774 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-log-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.608287 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a76a2ff-ed1a-4279-898c-54e85973f024-scripts\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.613132 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-ovn-controller-tls-certs\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.613855 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-combined-ca-bundle\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.638853 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5sdf\" (UniqueName: \"kubernetes.io/projected/8a76a2ff-ed1a-4279-898c-54e85973f024-kube-api-access-h5sdf\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc 
kubenswrapper[4808]: I0217 16:12:40.706522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-log\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706607 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-run\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706630 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-etc-ovs\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-scripts\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdjtn\" (UniqueName: \"kubernetes.io/projected/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-kube-api-access-bdjtn\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706705 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-lib\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706836 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-run\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-log\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706939 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-lib\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706966 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-etc-ovs\") pod \"ovn-controller-ovs-wkzp6\" 
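The klog header carries microsecond timestamps, which makes the mount cost per volume type visible in the records above: host-path volumes like var-run complete SetUp within a millisecond of starting, while secret and projected volumes (ovn-controller-tls-certs, kube-api-access-h5sdf) take tens of milliseconds because content must be fetched and written out. A stdlib-Go sketch that computes the started-to-succeeded latency per volume, under the assumption that both records appear in order on separate lines:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Parse the klog header time (e.g. "I0217 16:12:40.605384") and report
// the elapsed time between "MountVolume started" and
// "MountVolume.SetUp succeeded" for each pod/volume pair.
func main() {
	rec := regexp.MustCompile(`I(\d{4} \d{2}:\d{2}:\d{2}\.\d{6}).*?(MountVolume started|MountVolume\.SetUp succeeded) for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)
	start := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := rec.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, err := time.Parse("0102 15:04:05.000000", m[1]) // klog omits the year
		if err != nil {
			continue
		}
		key := m[4] + "/" + m[3]
		if m[2] == "MountVolume started" {
			start[key] = ts
		} else if t0, ok := start[key]; ok {
			fmt.Printf("%-70s %v\n", key, ts.Sub(t0))
		}
	}
}
```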
(UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.708928 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-scripts\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.722535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdjtn\" (UniqueName: \"kubernetes.io/projected/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-kube-api-access-bdjtn\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.745130 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.756411 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.306554 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.307856 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.310135 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.310317 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.313285 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.313367 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zvwsl" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.313497 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.339908 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421336 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-config\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421399 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421446 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421490 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcqk\" (UniqueName: \"kubernetes.io/projected/8c434a76-4dcf-4c69-aefa-5cda8b120a26-kube-api-access-dpcqk\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421519 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523103 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcqk\" (UniqueName: \"kubernetes.io/projected/8c434a76-4dcf-4c69-aefa-5cda8b120a26-kube-api-access-dpcqk\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523385 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523439 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523485 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-config\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.524596 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.525205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.526349 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-config\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.527055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.528230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.538444 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.538499 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da8c1c1c5898d14f5525cc39e1da9a0aa08af59ceda5dda5b3c382b0baabdf5a/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.540402 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.542047 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcqk\" (UniqueName: \"kubernetes.io/projected/8c434a76-4dcf-4c69-aefa-5cda8b120a26-kube-api-access-dpcqk\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.576435 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.624757 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.375969 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.378179 4808 util.go:30] "No sandbox for pod can be found. 
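The CSI-backed PVC volumes above take one extra step: MountVolume.MountDevice would normally call the driver's NodeStageVolume to stage the volume at the per-driver globalmount path before SetUp publishes it into the pod directory. Because kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, csi_attacher logs the "Skipping MountDevice" line and MountDevice "succeeds" as a no-op. Below is a hypothetical driver skeleton, assuming the CSI spec's Go bindings (github.com/container-storage-interface/spec/lib/go/csi); the nodeServer type is illustrative, not code from this driver.

```go
// Package sketch: the NodeGetCapabilities response decides whether the
// kubelet calls NodeStageVolume at the globalmount path seen in the log.
package driver

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct{}

// A driver that stages volumes would advertise STAGE_UNSTAGE_VOLUME here.
// Returning an empty capability list instead (as the hostpath provisioner
// in this log evidently does) makes kubelet skip MountDevice entirely.
func (n *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{{
			Type: &csi.NodeServiceCapability_Rpc{
				Rpc: &csi.NodeServiceCapability_RPC{
					Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
				},
			},
		}},
	}, nil
}
```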
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.382058 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bsn6p" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.382404 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.382612 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.383959 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.385405 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488000 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488070 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488184 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfw9v\" (UniqueName: \"kubernetes.io/projected/220c5de1-b4bf-454c-b013-17d78d86cca3-kube-api-access-dfw9v\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488222 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.489131 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.489191 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-config\") 
pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.489218 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590861 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590902 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590925 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfw9v\" (UniqueName: \"kubernetes.io/projected/220c5de1-b4bf-454c-b013-17d78d86cca3-kube-api-access-dfw9v\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590954 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-config\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591054 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591744 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.592363 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-config\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.592932 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.597822 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.601572 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.604051 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.604091 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bce1e0661817a74b72ed4a389aa718a5527213e8b53598d1402b5c61339dc163/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.607783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.613645 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfw9v\" (UniqueName: \"kubernetes.io/projected/220c5de1-b4bf-454c-b013-17d78d86cca3-kube-api-access-dfw9v\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.641776 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.708183 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.682982 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.684261 4808 util.go:30] "No sandbox for pod can be found. 
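Every pod in this window follows the same startup pattern: a "SyncLoop ADD" from the api source, reflector cache population for the objects its volumes reference, volume verification and mounting, and one or more "No sandbox for pod can be found. Need to start a new one" records as the sync loop reconciles the pod before its sandbox exists. A stdlib-Go sketch that reports the offset of each no-sandbox record relative to the pod's ADD, assuming in-order input in the one-record-per-line shape above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Record each pod's "SyncLoop ADD" time, then print how long after ADD
// every "No sandbox for pod can be found" record appears.
func main() {
	add := regexp.MustCompile(`I(\d{4} \d{2}:\d{2}:\d{2}\.\d{6}).*"SyncLoop ADD" source="api" pods=\["([^"]+)"\]`)
	sandbox := regexp.MustCompile(`I(\d{4} \d{2}:\d{2}:\d{2}\.\d{6}).*No sandbox for pod can be found.*pod="([^"]+)"`)
	added := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := add.FindStringSubmatch(line); m != nil {
			if ts, err := time.Parse("0102 15:04:05.000000", m[1]); err == nil {
				added[m[2]] = ts
			}
			continue
		}
		if m := sandbox.FindStringSubmatch(line); m != nil {
			if t0, ok := added[m[2]]; ok {
				if ts, err := time.Parse("0102 15:04:05.000000", m[1]); err == nil {
					fmt.Printf("%-55s no-sandbox %v after ADD\n", m[2], ts.Sub(t0))
				}
			}
		}
	}
}
```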
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690610 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690734 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690747 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-7v6q4" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690871 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690900 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.704014 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.706916 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7t4x\" (UniqueName: \"kubernetes.io/projected/4fa85572-1552-4a27-8974-b1e2d376167c-kube-api-access-h7t4x\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707106 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707288 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc 
kubenswrapper[4808]: I0217 16:12:46.808978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809048 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809071 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809090 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809150 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7t4x\" (UniqueName: \"kubernetes.io/projected/4fa85572-1552-4a27-8974-b1e2d376167c-kube-api-access-h7t4x\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.810213 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.810277 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.827159 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.830188 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.856498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7t4x\" (UniqueName: \"kubernetes.io/projected/4fa85572-1552-4a27-8974-b1e2d376167c-kube-api-access-h7t4x\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.923903 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.925236 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.927753 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.931816 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.932053 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.932274 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.994396 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.034435 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.038803 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.039027 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.040922 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.050831 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.050888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051036 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051197 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051271 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28nlg\" (UniqueName: \"kubernetes.io/projected/6df15762-0f06-48ff-89bf-00f5118c6ced-kube-api-access-28nlg\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051322 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.088413 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.127047 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.128215 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.131561 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.132465 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.132637 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.133075 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.133229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-gwrp6" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.133323 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.135414 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.136348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.144274 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.155745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.156060 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.156193 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.156716 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc 
kubenswrapper[4808]: I0217 16:12:47.156876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.157412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.157559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.157720 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158216 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28nlg\" (UniqueName: \"kubernetes.io/projected/6df15762-0f06-48ff-89bf-00f5118c6ced-kube-api-access-28nlg\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158744 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158892 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: 
\"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159623 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159713 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159895 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2x5n\" (UniqueName: \"kubernetes.io/projected/be29c259-d619-4326-b866-2a8560d9b818-kube-api-access-c2x5n\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.160476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.160601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrw8\" (UniqueName: \"kubernetes.io/projected/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-kube-api-access-gkrw8\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.160723 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.163481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.165573 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.165708 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.166385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.175156 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.179195 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.199441 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28nlg\" (UniqueName: \"kubernetes.io/projected/6df15762-0f06-48ff-89bf-00f5118c6ced-kube-api-access-28nlg\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.214835 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.262754 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263409 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263435 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263546 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263567 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263599 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263663 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.264043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.264898 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266325 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266607 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266807 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266887 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297vp\" (UniqueName: \"kubernetes.io/projected/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-kube-api-access-297vp\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266952 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267048 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267140 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267173 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267196 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267214 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267257 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267277 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2x5n\" (UniqueName: \"kubernetes.io/projected/be29c259-d619-4326-b866-2a8560d9b818-kube-api-access-c2x5n\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.268495 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gkrw8\" (UniqueName: \"kubernetes.io/projected/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-kube-api-access-gkrw8\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.268921 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.269409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.270392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.270729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.271302 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.271735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.273986 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.276005 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " 
pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.276808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.277127 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.281310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.291839 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2x5n\" (UniqueName: \"kubernetes.io/projected/be29c259-d619-4326-b866-2a8560d9b818-kube-api-access-c2x5n\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.295555 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkrw8\" (UniqueName: \"kubernetes.io/projected/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-kube-api-access-gkrw8\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.357370 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370190 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297vp\" (UniqueName: \"kubernetes.io/projected/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-kube-api-access-297vp\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370229 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370256 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370329 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370385 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370401 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 
16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370427 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.371636 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.372671 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.373230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.373744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.374357 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.374690 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.378096 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.385033 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.387214 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297vp\" (UniqueName: \"kubernetes.io/projected/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-kube-api-access-297vp\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.468151 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.501391 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.831796 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.832950 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.835020 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.836867 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.846859 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.946903 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.948402 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.952462 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.952651 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.958341 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984308 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984347 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984368 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984449 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjkw\" (UniqueName: \"kubernetes.io/projected/c7929d5b-e791-419e-8039-50cc9f8202f2-kube-api-access-nhjkw\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984517 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:47 crc 
kubenswrapper[4808]: I0217 16:12:47.984551 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.023284 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.024457 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.029407 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.030321 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.051106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086497 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086577 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086612 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086633 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: 
\"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086673 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086697 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086717 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086746 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5x5h\" (UniqueName: \"kubernetes.io/projected/c850b5fe-4c28-4136-8136-fae52e38371b-kube-api-access-g5x5h\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086772 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086832 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjkw\" (UniqueName: \"kubernetes.io/projected/c7929d5b-e791-419e-8039-50cc9f8202f2-kube-api-access-nhjkw\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086906 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.087764 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.087804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.092278 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.092317 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.092428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.094386 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.104248 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.112340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjkw\" (UniqueName: 
\"kubernetes.io/projected/c7929d5b-e791-419e-8039-50cc9f8202f2-kube-api-access-nhjkw\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.113382 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.114586 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190030 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190097 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxpb\" (UniqueName: \"kubernetes.io/projected/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-kube-api-access-tpxpb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 
16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190381 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5x5h\" (UniqueName: \"kubernetes.io/projected/c850b5fe-4c28-4136-8136-fae52e38371b-kube-api-access-g5x5h\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190480 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190544 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190577 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190694 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190385 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.191500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.191683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.194777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.194925 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.195540 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.207945 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5x5h\" (UniqueName: \"kubernetes.io/projected/c850b5fe-4c28-4136-8136-fae52e38371b-kube-api-access-g5x5h\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.209142 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.211152 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.270059 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293637 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpxpb\" (UniqueName: \"kubernetes.io/projected/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-kube-api-access-tpxpb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293900 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293979 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.296036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.300669 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.302744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.302854 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.320494 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.320985 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.322539 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpxpb\" (UniqueName: \"kubernetes.io/projected/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-kube-api-access-tpxpb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.323221 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.323381 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d88gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8jstw_openstack(973eee94-2439-415c-b9b8-2f6f72738ac9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.324889 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" podUID="973eee94-2439-415c-b9b8-2f6f72738ac9" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.345950 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.352641 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.359224 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.359348 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kwnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g8xlz_openstack(38d70adc-e16e-4470-9b59-1c728c29318d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.361765 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" podUID="38d70adc-e16e-4470-9b59-1c728c29318d" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.751907 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.189982 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerStarted","Data":"f86bb416640f1c93ce31ac0513d794573c83b4fcf30431f9c4619fd3c48ca73d"} Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.408123 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:12:49 crc kubenswrapper[4808]: W0217 16:12:49.410286 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda020d38c_5e24_4266_96dc_9050e4d82f46.slice/crio-b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1 WatchSource:0}: Error finding container b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1: Status 404 returned error can't find the container with id b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1 Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.421130 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.888486 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.005942 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.028634 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.037622 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"38d70adc-e16e-4470-9b59-1c728c29318d\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.037698 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"38d70adc-e16e-4470-9b59-1c728c29318d\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.037762 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"38d70adc-e16e-4470-9b59-1c728c29318d\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.038354 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config" (OuterVolumeSpecName: "config") pod "38d70adc-e16e-4470-9b59-1c728c29318d" (UID: "38d70adc-e16e-4470-9b59-1c728c29318d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039026 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38d70adc-e16e-4470-9b59-1c728c29318d" (UID: "38d70adc-e16e-4470-9b59-1c728c29318d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039062 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039728 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039746 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.045056 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df15762_0f06_48ff_89bf_00f5118c6ced.slice/crio-1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1 WatchSource:0}: Error finding container 1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1: Status 404 returned error can't find the container with id 1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.045215 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk" (OuterVolumeSpecName: "kube-api-access-2kwnk") pod "38d70adc-e16e-4470-9b59-1c728c29318d" (UID: "38d70adc-e16e-4470-9b59-1c728c29318d"). InnerVolumeSpecName "kube-api-access-2kwnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.085985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.135622 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ea38754_3b00_4bcb_93d9_28b60dda0e0a.slice/crio-8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a WatchSource:0}: Error finding container 8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a: Status 404 returned error can't find the container with id 8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.139822 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac5f26b_ff81_49e2_854f_9cad23a57593.slice/crio-83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3 WatchSource:0}: Error finding container 83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3: Status 404 returned error can't find the container with id 83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.140314 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"973eee94-2439-415c-b9b8-2f6f72738ac9\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.140346 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"973eee94-2439-415c-b9b8-2f6f72738ac9\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.140712 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.141091 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config" (OuterVolumeSpecName: "config") pod "973eee94-2439-415c-b9b8-2f6f72738ac9" (UID: "973eee94-2439-415c-b9b8-2f6f72738ac9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.143706 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade81c90_5cdf_45d4_ad2f_52a3514e1596.slice/crio-38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d WatchSource:0}: Error finding container 38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d: Status 404 returned error can't find the container with id 38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.144467 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz" (OuterVolumeSpecName: "kube-api-access-d88gz") pod "973eee94-2439-415c-b9b8-2f6f72738ac9" (UID: "973eee94-2439-415c-b9b8-2f6f72738ac9"). InnerVolumeSpecName "kube-api-access-d88gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.146373 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56f9931d_b010_4282_9068_16b2e4e4b247.slice/crio-25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94 WatchSource:0}: Error finding container 25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94: Status 404 returned error can't find the container with id 25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.156452 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.164182 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.165643 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.165672 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" event={"ID":"38d70adc-e16e-4470-9b59-1c728c29318d","Type":"ContainerDied","Data":"36e351405a8f30735cdfbd65ebbfe018758adcc5855f9db2bc133ed0f4654c84"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.168674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerStarted","Data":"b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.169144 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.175911 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.175950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" event={"ID":"be29c259-d619-4326-b866-2a8560d9b818","Type":"ContainerStarted","Data":"082ca6b4e12db56a0a0d12947f1627dbd9e1570aebf8a6e79f97728342a05ecc"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.176889 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerStarted","Data":"57ad7e9e95603b9e00dced5aff567d0fff1bbfb9d96b8bfdb7074f711d80c274"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.178295 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2ea38754-3b00-4bcb-93d9-28b60dda0e0a","Type":"ContainerStarted","Data":"8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.179794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.180654 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.181242 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerStarted","Data":"4a7ab805f716d84e3d73f9394b1b45757927f27450dd37708e63205a258bb4f5"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.182460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerStarted","Data":"83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.183816 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm" event={"ID":"8a76a2ff-ed1a-4279-898c-54e85973f024","Type":"ContainerStarted","Data":"48f92b9e6e4aae0fd714e91be23901f5268bea1eaceba93c5365d9d0bcb08fa6"} Feb 17 16:12:50 
crc kubenswrapper[4808]: I0217 16:12:50.186122 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.187971 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"c5db49362fb8e196d602a48475009fd093a64b0b760100ed93c1a54dba3d1832"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.191401 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerStarted","Data":"38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.193369 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" event={"ID":"973eee94-2439-415c-b9b8-2f6f72738ac9","Type":"ContainerDied","Data":"8041177f9f605013ae787b3681b3a5558dd54bee858e7ca6318f63453fa6a01c"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.193401 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.195794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" event={"ID":"6df15762-0f06-48ff-89bf-00f5118c6ced","Type":"ContainerStarted","Data":"1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.244874 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.244904 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.256386 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.271723 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.314041 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.324688 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.336803 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.356194 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.373773 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.385795 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpxpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-index-gateway-0_openstack(d6dbebd3-2b7c-4afa-8937-5c47b749e8b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.386994 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="d6dbebd3-2b7c-4afa-8937-5c47b749e8b0" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.388526 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.396458 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h7t4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-585d9bcbc-zfhfg_openstack(4fa85572-1552-4a27-8974-b1e2d376167c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.397871 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podUID="4fa85572-1552-4a27-8974-b1e2d376167c" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.401882 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkrw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7f8685b49f-77rbq_openstack(c4fa7a6a-b7fc-464c-b529-dcf8d20de97e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.404338 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" 
podUID="c4fa7a6a-b7fc-464c-b529-dcf8d20de97e" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.404528 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5x5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-compactor-0_openstack(c850b5fe-4c28-4136-8136-fae52e38371b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.405783 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="c850b5fe-4c28-4136-8136-fae52e38371b" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.424389 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.437685 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.447848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"] Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.513650 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c434a76_4dcf_4c69_aefa_5cda8b120a26.slice/crio-98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4 WatchSource:0}: Error finding container 98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4: Status 404 returned error can't find the container with id 98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.520440 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.161864 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d70adc-e16e-4470-9b59-1c728c29318d" path="/var/lib/kubelet/pods/38d70adc-e16e-4470-9b59-1c728c29318d/volumes" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.162457 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973eee94-2439-415c-b9b8-2f6f72738ac9" path="/var/lib/kubelet/pods/973eee94-2439-415c-b9b8-2f6f72738ac9/volumes" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.204039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"c7929d5b-e791-419e-8039-50cc9f8202f2","Type":"ContainerStarted","Data":"ac175af8c51c60196e3db1cdaa1115158cb3fe980bc2271fba02c2b52c653e09"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.204995 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"8c434a76-4dcf-4c69-aefa-5cda8b120a26","Type":"ContainerStarted","Data":"98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.206315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerStarted","Data":"fe6c047a841d65d85a9f0e609ea1b96b4c6bc76859984c45d4fc65974fb15811"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.207327 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0","Type":"ContainerStarted","Data":"63329b52a0c8247b74093b8acc28b39c130f2ee05c18ab46ad443269a2d5312e"} Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.208781 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="d6dbebd3-2b7c-4afa-8937-5c47b749e8b0" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.211448 4808 generic.go:334] "Generic (PLEG): container finished" podID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" exitCode=0 Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.211494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerDied","Data":"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.213486 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" event={"ID":"4fa85572-1552-4a27-8974-b1e2d376167c","Type":"ContainerStarted","Data":"087e41c46374c7d3fbc02456f1d41ea551c9e915163061c15c14bdcab6cad92e"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.214460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" event={"ID":"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e","Type":"ContainerStarted","Data":"feadba16ace8e9ce88dd690f086be86ebf2a225876af032846cd52e794d3b6a1"} Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.215495 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podUID="4fa85572-1552-4a27-8974-b1e2d376167c" Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.216072 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" podUID="c4fa7a6a-b7fc-464c-b529-dcf8d20de97e" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.223477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" event={"ID":"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0","Type":"ContainerStarted","Data":"c7fb597c1c2f36ad981298a1d507b4e4aae1c17ec9b1318e1b62e7efe004abd2"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.225950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"c850b5fe-4c28-4136-8136-fae52e38371b","Type":"ContainerStarted","Data":"92ba2bbb03d437b99f78a1aae60b10118b23cff12e044974d037b8b0e94570f5"} Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.227386 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="c850b5fe-4c28-4136-8136-fae52e38371b" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.241891 4808 generic.go:334] "Generic (PLEG): container finished" podID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" exitCode=0 Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.241968 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerDied","Data":"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.468804 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.572154 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wkzp6"] Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.592415 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.592464 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.592502 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.593134 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.593180 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" 
containerID="cri-o://12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba" gracePeriod=600 Feb 17 16:12:51 crc kubenswrapper[4808]: W0217 16:12:51.995192 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220c5de1_b4bf_454c_b013_17d78d86cca3.slice/crio-e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1 WatchSource:0}: Error finding container e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1: Status 404 returned error can't find the container with id e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1 Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.291110 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba" exitCode=0 Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.291209 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba"} Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.291255 4808 scope.go:117] "RemoveContainer" containerID="284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d" Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.299467 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220c5de1-b4bf-454c-b013-17d78d86cca3","Type":"ContainerStarted","Data":"e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1"} Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307248 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podUID="4fa85572-1552-4a27-8974-b1e2d376167c" Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307665 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="c850b5fe-4c28-4136-8136-fae52e38371b" Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307743 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="d6dbebd3-2b7c-4afa-8937-5c47b749e8b0" Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307781 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" 
podUID="c4fa7a6a-b7fc-464c-b529-dcf8d20de97e" Feb 17 16:12:54 crc kubenswrapper[4808]: I0217 16:12:54.314381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"b7e5aef974fc8a45b3d23dcb43254aa563342f33d66ab4d6df979b8972ab7483"} Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.300270 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.301236 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh7d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(2917eca2-0431-4bd6-ad96-ab8464cc4fd7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.302195 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.302488 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.302807 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28nlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-58c84b5844-pkj8k_openstack(6df15762-0f06-48ff-89bf-00f5118c6ced): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.304391 
4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" podUID="6df15762-0f06-48ff-89bf-00f5118c6ced" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.387121 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" podUID="6df15762-0f06-48ff-89bf-00f5118c6ced" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.387212 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.076279 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.076483 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n88h9dh57bh676h554h58fhdch656h597h556hd9h666h5bchddh56ch57fhf4h659h54bh558h665h5bbh575h8bh685h5ffhc4h5ch5d6hddh646h545q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(2ea38754-3b00-4bcb-93d9-28b60dda0e0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.077646 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="2ea38754-3b00-4bcb-93d9-28b60dda0e0a" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.392009 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="2ea38754-3b00-4bcb-93d9-28b60dda0e0a" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.130534 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.131013 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjb7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(ade81c90-5cdf-45d4-ad2f-52a3514e1596): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.132734 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="ade81c90-5cdf-45d4-ad2f-52a3514e1596" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.397477 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.397664 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(a020d38c-5e24-4266-96dc-9050e4d82f46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.398932 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="a020d38c-5e24-4266-96dc-9050e4d82f46" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.419702 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="ade81c90-5cdf-45d4-ad2f-52a3514e1596" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.419728 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" 
podUID="a020d38c-5e24-4266-96dc-9050e4d82f46" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.700652 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.700839 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56ch549h694h697h8h589h5b4h578h5cbhbdh684hc8h57bh575h4h7ch576h5f7h88h68ch699h88h5ddh697h94h5f4h58h55dh5dh57bh6fh65cq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpcqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(8c434a76-4dcf-4c69-aefa-5cda8b120a26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:13:08 crc kubenswrapper[4808]: I0217 16:13:08.444035 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49"} Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.657803 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.657870 4808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.658033 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(0a2bf674-1881-41e9-9c0f-93e8f14ac222): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.659880 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" Feb 17 16:13:09 crc kubenswrapper[4808]: I0217 16:13:09.460486 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerStarted","Data":"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce"} Feb 17 16:13:09 crc kubenswrapper[4808]: I0217 16:13:09.461605 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:13:09 crc kubenswrapper[4808]: E0217 16:13:09.477058 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" Feb 17 16:13:09 crc kubenswrapper[4808]: I0217 16:13:09.501442 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" podStartSLOduration=38.903571664 podStartE2EDuration="39.501421161s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.06274317 +0000 UTC m=+1133.579102243" lastFinishedPulling="2026-02-17 16:12:50.660592667 +0000 UTC m=+1134.176951740" observedRunningTime="2026-02-17 16:13:09.494471472 +0000 UTC m=+1153.010830545" watchObservedRunningTime="2026-02-17 16:13:09.501421161 +0000 UTC m=+1153.017780234" Feb 17 16:13:10 crc kubenswrapper[4808]: I0217 16:13:10.472266 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" 
event={"ID":"be29c259-d619-4326-b866-2a8560d9b818","Type":"ContainerStarted","Data":"ad1db0549960832f0c52d19a16630dbc313a477607dbb1efac4387c34900ecb9"} Feb 17 16:13:10 crc kubenswrapper[4808]: I0217 16:13:10.473768 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:13:10 crc kubenswrapper[4808]: I0217 16:13:10.513319 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" podStartSLOduration=7.104670292 podStartE2EDuration="24.513295388s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.104000007 +0000 UTC m=+1133.620359080" lastFinishedPulling="2026-02-17 16:13:07.512625093 +0000 UTC m=+1151.028984176" observedRunningTime="2026-02-17 16:13:10.511822128 +0000 UTC m=+1154.028181211" watchObservedRunningTime="2026-02-17 16:13:10.513295388 +0000 UTC m=+1154.029654471" Feb 17 16:13:11 crc kubenswrapper[4808]: E0217 16:13:11.238696 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="8c434a76-4dcf-4c69-aefa-5cda8b120a26" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.485363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"c850b5fe-4c28-4136-8136-fae52e38371b","Type":"ContainerStarted","Data":"365d1fda7dc08a45bbf79c14ba335b4273126085b4fea9654c779f8c356a92d4"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.486143 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.488480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8c434a76-4dcf-4c69-aefa-5cda8b120a26","Type":"ContainerStarted","Data":"d056ba09093e2b7fcfc74f1bbf4fae4b8d0c36df395ee8b95e6ebeaf91c294e9"} Feb 17 16:13:11 crc kubenswrapper[4808]: E0217 16:13:11.492135 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="8c434a76-4dcf-4c69-aefa-5cda8b120a26" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.493416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0","Type":"ContainerStarted","Data":"fbda8631bae74da6b76563d2704fb46101b4e20134f4b7d112690b3486ec41cf"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.493778 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.496414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"9668c0913113779d6a3c7f672c39d2f4905fbbea560063417a4444ac286de562"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.501358 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" 
event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerStarted","Data":"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.501498 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.505866 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"c7929d5b-e791-419e-8039-50cc9f8202f2","Type":"ContainerStarted","Data":"1e0a3f64a1d9304e54c45d6a329fe87b933bf3d74ea52279becd1608617a25aa"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.506097 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.508283 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" event={"ID":"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0","Type":"ContainerStarted","Data":"0024fe61e7e5edce8a413484d1e11d9c581c5cc963e9ea54babc75e64715cd46"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.509069 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.514801 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=-9223372011.339989 podStartE2EDuration="25.514786634s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.404425401 +0000 UTC m=+1133.920784474" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:11.510861497 +0000 UTC m=+1155.027220600" watchObservedRunningTime="2026-02-17 16:13:11.514786634 +0000 UTC m=+1155.031145707" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.518130 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" event={"ID":"4fa85572-1552-4a27-8974-b1e2d376167c","Type":"ContainerStarted","Data":"f98d913f2e956d9c296144d39839f95499e60c922196f5702dc321f27dfa499c"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.518502 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.522023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" event={"ID":"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e","Type":"ContainerStarted","Data":"aa839321232d9ef7ebe06b138c51f6a574df0569526c3cedb08419ce7f22a465"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.524422 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.527872 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm" event={"ID":"8a76a2ff-ed1a-4279-898c-54e85973f024","Type":"ContainerStarted","Data":"62fcd90b140ef708febe681e2940a5eb938b5105c6ca9115b5284e8bef67d870"} Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.527920 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pfcvm" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 
16:13:11.531320 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.537018 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.584405 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" podStartSLOduration=41.069339022 podStartE2EDuration="41.584381158s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.146333683 +0000 UTC m=+1133.662692756" lastFinishedPulling="2026-02-17 16:12:50.661375819 +0000 UTC m=+1134.177734892" observedRunningTime="2026-02-17 16:13:11.570524793 +0000 UTC m=+1155.086883916" watchObservedRunningTime="2026-02-17 16:13:11.584381158 +0000 UTC m=+1155.100740231" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.614944 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" podStartSLOduration=7.170532481 podStartE2EDuration="24.614922735s" podCreationTimestamp="2026-02-17 16:12:47 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.378583852 +0000 UTC m=+1133.894942925" lastFinishedPulling="2026-02-17 16:13:07.822974106 +0000 UTC m=+1151.339333179" observedRunningTime="2026-02-17 16:13:11.611221875 +0000 UTC m=+1155.127580968" watchObservedRunningTime="2026-02-17 16:13:11.614922735 +0000 UTC m=+1155.131281808" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.617664 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=6.938370954 podStartE2EDuration="24.617642228s" podCreationTimestamp="2026-02-17 16:12:47 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.38556406 +0000 UTC m=+1133.901923133" lastFinishedPulling="2026-02-17 16:13:08.064835304 +0000 UTC m=+1151.581194407" observedRunningTime="2026-02-17 16:13:11.594984475 +0000 UTC m=+1155.111343568" watchObservedRunningTime="2026-02-17 16:13:11.617642228 +0000 UTC m=+1155.134001311" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.652184 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=8.303942073 podStartE2EDuration="25.652162614s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.385503269 +0000 UTC m=+1133.901862342" lastFinishedPulling="2026-02-17 16:13:07.73372381 +0000 UTC m=+1151.250082883" observedRunningTime="2026-02-17 16:13:11.642991755 +0000 UTC m=+1155.159350828" watchObservedRunningTime="2026-02-17 16:13:11.652162614 +0000 UTC m=+1155.168521677" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.670742 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pfcvm" podStartSLOduration=13.965100699 podStartE2EDuration="31.670726006s" podCreationTimestamp="2026-02-17 16:12:40 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.897720012 +0000 UTC m=+1133.414079085" lastFinishedPulling="2026-02-17 16:13:07.603345319 +0000 UTC m=+1151.119704392" observedRunningTime="2026-02-17 16:13:11.665956127 +0000 UTC m=+1155.182315200" watchObservedRunningTime="2026-02-17 16:13:11.670726006 +0000 UTC m=+1155.187085079" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 
16:13:11.692171 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podStartSLOduration=8.372070016 podStartE2EDuration="25.692155236s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.396235409 +0000 UTC m=+1133.912594482" lastFinishedPulling="2026-02-17 16:13:07.716320629 +0000 UTC m=+1151.232679702" observedRunningTime="2026-02-17 16:13:11.684882949 +0000 UTC m=+1155.201242032" watchObservedRunningTime="2026-02-17 16:13:11.692155236 +0000 UTC m=+1155.208514299" Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.716447 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" podStartSLOduration=7.067012269 podStartE2EDuration="24.716427504s" podCreationTimestamp="2026-02-17 16:12:47 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.401747919 +0000 UTC m=+1133.918106992" lastFinishedPulling="2026-02-17 16:13:08.051163154 +0000 UTC m=+1151.567522227" observedRunningTime="2026-02-17 16:13:11.704375567 +0000 UTC m=+1155.220734640" watchObservedRunningTime="2026-02-17 16:13:11.716427504 +0000 UTC m=+1155.232786577" Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.541379 4808 generic.go:334] "Generic (PLEG): container finished" podID="30b7fc5a-690b-4ac6-b37c-9c1ec074f962" containerID="9668c0913113779d6a3c7f672c39d2f4905fbbea560063417a4444ac286de562" exitCode=0 Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.541452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerDied","Data":"9668c0913113779d6a3c7f672c39d2f4905fbbea560063417a4444ac286de562"} Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.551792 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerStarted","Data":"19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430"} Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.554626 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerStarted","Data":"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9"} Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.559670 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"eaab0a6bfd8b2f49bb5b0419ebf83f83f3a7d7db298ba6d150f0ad5ee4951a2a"} Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.565871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220c5de1-b4bf-454c-b013-17d78d86cca3","Type":"ContainerStarted","Data":"af3e2a009a7197d0992be49640be58e7c23e3d5086195401a2da944ebba0e803"} Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.567099 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220c5de1-b4bf-454c-b013-17d78d86cca3","Type":"ContainerStarted","Data":"4e7c685fa6fff63dbe53be62bc471d8379634655c88d7bbf8d325e45d53ca65c"} Feb 17 16:13:12 crc kubenswrapper[4808]: E0217 16:13:12.571120 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="8c434a76-4dcf-4c69-aefa-5cda8b120a26" Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.694370 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.622768516 podStartE2EDuration="28.694350311s" podCreationTimestamp="2026-02-17 16:12:44 +0000 UTC" firstStartedPulling="2026-02-17 16:12:51.99769176 +0000 UTC m=+1135.514050833" lastFinishedPulling="2026-02-17 16:13:08.069273525 +0000 UTC m=+1151.585632628" observedRunningTime="2026-02-17 16:13:12.692485651 +0000 UTC m=+1156.208844724" watchObservedRunningTime="2026-02-17 16:13:12.694350311 +0000 UTC m=+1156.210709394" Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.708895 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 16:13:13 crc kubenswrapper[4808]: I0217 16:13:13.580554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"6a1c89af93d94efd5543256071b315797cc20e0d74a7e5c42b8ddd0d1c80278d"} Feb 17 16:13:13 crc kubenswrapper[4808]: I0217 16:13:13.580636 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"17b1effd602c5d79c34fa01cdf78b27d41c205829b975aff02552a21c69842e5"} Feb 17 16:13:13 crc kubenswrapper[4808]: I0217 16:13:13.620160 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wkzp6" podStartSLOduration=19.65116461 podStartE2EDuration="33.620129547s" podCreationTimestamp="2026-02-17 16:12:40 +0000 UTC" firstStartedPulling="2026-02-17 16:12:53.635301958 +0000 UTC m=+1137.151661031" lastFinishedPulling="2026-02-17 16:13:07.604266885 +0000 UTC m=+1151.120625968" observedRunningTime="2026-02-17 16:13:13.602035577 +0000 UTC m=+1157.118394690" watchObservedRunningTime="2026-02-17 16:13:13.620129547 +0000 UTC m=+1157.136488660" Feb 17 16:13:14 crc kubenswrapper[4808]: I0217 16:13:14.586765 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:13:14 crc kubenswrapper[4808]: I0217 16:13:14.587236 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.617793 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.708673 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.753668 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.923841 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.978750 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.602750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" event={"ID":"6df15762-0f06-48ff-89bf-00f5118c6ced","Type":"ContainerStarted","Data":"b375cfb7110702e40d0ee78d64b6a20b4645c6a0ae1c5f875a9acfef15ecbf18"} Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.603380 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" containerID="cri-o://84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" gracePeriod=10 Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.629221 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" podStartSLOduration=-9223372006.225574 podStartE2EDuration="30.629201659s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.062847413 +0000 UTC m=+1133.579206476" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:16.623551316 +0000 UTC m=+1160.139910419" watchObservedRunningTime="2026-02-17 16:13:16.629201659 +0000 UTC m=+1160.145560732" Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.655851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.960463 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"] Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.971185 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.976480 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.987324 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.000967 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qh29t"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.002389 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.006226 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.058106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qh29t"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077785 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovn-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-combined-ca-bundle\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077927 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077966 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d5a09f-33dd-49cf-9a31-a21d73a43b86-config\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078001 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhss\" (UniqueName: \"kubernetes.io/projected/52d5a09f-33dd-49cf-9a31-a21d73a43b86-kube-api-access-tmhss\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078165 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovs-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.128155 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187105 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovs-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187137 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187182 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovn-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187202 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187234 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-combined-ca-bundle\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187271 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d5a09f-33dd-49cf-9a31-a21d73a43b86-config\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187315 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmhss\" (UniqueName: \"kubernetes.io/projected/52d5a09f-33dd-49cf-9a31-a21d73a43b86-kube-api-access-tmhss\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.188274 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovn-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.190608 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.190629 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d5a09f-33dd-49cf-9a31-a21d73a43b86-config\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.190684 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.191126 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovs-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.191292 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.197971 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-combined-ca-bundle\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.210248 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmhss\" (UniqueName: \"kubernetes.io/projected/52d5a09f-33dd-49cf-9a31-a21d73a43b86-kube-api-access-tmhss\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.217335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.220491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.278002 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.290285 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"bac5f26b-ff81-49e2-854f-9cad23a57593\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.290387 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"bac5f26b-ff81-49e2-854f-9cad23a57593\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.290431 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"bac5f26b-ff81-49e2-854f-9cad23a57593\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.299976 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp" (OuterVolumeSpecName: "kube-api-access-tdvxp") pod "bac5f26b-ff81-49e2-854f-9cad23a57593" (UID: "bac5f26b-ff81-49e2-854f-9cad23a57593"). InnerVolumeSpecName "kube-api-access-tdvxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.324891 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.325955 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.334831 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.346643 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bac5f26b-ff81-49e2-854f-9cad23a57593" (UID: "bac5f26b-ff81-49e2-854f-9cad23a57593"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.359301 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config" (OuterVolumeSpecName: "config") pod "bac5f26b-ff81-49e2-854f-9cad23a57593" (UID: "bac5f26b-ff81-49e2-854f-9cad23a57593"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.372696 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.373484 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.373496 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.373518 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="init" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.373524 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="init" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.373712 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.374672 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.384332 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.405926 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.405985 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.405995 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.412401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507754 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507851 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.613777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: 
\"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614149 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614219 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614260 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614318 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.615921 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.616234 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.616830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.619198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.626615 4808 generic.go:334] "Generic (PLEG): container finished" podID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" exitCode=0 Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.627795 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.630840 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerDied","Data":"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a"} Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.630910 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerDied","Data":"83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3"} Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.630932 4808 scope.go:117] "RemoveContainer" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.640349 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.670784 4808 scope.go:117] "RemoveContainer" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.680691 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.696176 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.708714 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.712324 4808 scope.go:117] "RemoveContainer" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.714076 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a\": container with ID starting with 84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a not found: ID does not exist" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.714109 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a"} err="failed to get container status \"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a\": rpc error: code = NotFound desc = could not find container \"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a\": container with ID starting with 84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a not found: ID does not exist" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.714137 4808 scope.go:117] "RemoveContainer" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.714438 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce\": container with ID starting with 33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce not found: ID does not exist" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.714466 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce"} err="failed to get container status \"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce\": rpc error: code = NotFound desc = could not find container \"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce\": container with ID starting with 33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce not found: ID does not exist" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.909091 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qh29t"] Feb 17 16:13:17 crc kubenswrapper[4808]: W0217 16:13:17.941799 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d5a09f_33dd_49cf_9a31_a21d73a43b86.slice/crio-b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6 WatchSource:0}: Error finding container b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6: Status 404 returned error can't find the container with id b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6 Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.040364 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"] Feb 17 16:13:18 crc kubenswrapper[4808]: W0217 16:13:18.051900 4808 manager.go:1169] Failed to process watch event {EventType:0 
Feb 17 16:13:18 crc kubenswrapper[4808]: W0217 16:13:18.051900 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1602c17_564b_482f_b5cc_cadd68ec07da.slice/crio-f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e WatchSource:0}: Error finding container f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e: Status 404 returned error can't find the container with id f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.349899 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"]
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.650142 4808 generic.go:334] "Generic (PLEG): container finished" podID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerID="02fe4733904170d6ff8ca546ae278d5400ac1f6b5e0058e060083b8b17f2a502" exitCode=0
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.650241 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" event={"ID":"b1602c17-564b-482f-b5cc-cadd68ec07da","Type":"ContainerDied","Data":"02fe4733904170d6ff8ca546ae278d5400ac1f6b5e0058e060083b8b17f2a502"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.650274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" event={"ID":"b1602c17-564b-482f-b5cc-cadd68ec07da","Type":"ContainerStarted","Data":"f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.660670 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qh29t" event={"ID":"52d5a09f-33dd-49cf-9a31-a21d73a43b86","Type":"ContainerStarted","Data":"c00ff8d4a75ccaaaad28d0a38b92e55dce1ebb4576e8e7aef8057a40df458b3b"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.660725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qh29t" event={"ID":"52d5a09f-33dd-49cf-9a31-a21d73a43b86","Type":"ContainerStarted","Data":"b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.666562 4808 generic.go:334] "Generic (PLEG): container finished" podID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerID="b60fbde46c6075a50ace4cd1663669a692d98861f29087030c80fceb181a0f6f" exitCode=0
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.666635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerDied","Data":"b60fbde46c6075a50ace4cd1663669a692d98861f29087030c80fceb181a0f6f"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.666663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerStarted","Data":"d63637f01ebacc82cd0cd4fa9f1b31ac08b1e5040c4e16549d0faa344661b80a"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.707712 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qh29t" podStartSLOduration=2.707693255 podStartE2EDuration="2.707693255s" podCreationTimestamp="2026-02-17 16:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:18.701458176 +0000 UTC m=+1162.217817249" watchObservedRunningTime="2026-02-17 16:13:18.707693255 +0000 UTC m=+1162.224052328" Feb 17
16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.102039 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157352 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157417 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157459 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.164252 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" path="/var/lib/kubelet/pods/bac5f26b-ff81-49e2-854f-9cad23a57593/volumes" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.275929 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw" (OuterVolumeSpecName: "kube-api-access-jllqw") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "kube-api-access-jllqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.364000 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.474241 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.492412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config" (OuterVolumeSpecName: "config") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.495933 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.566674 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.566724 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.566733 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.694875 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerStarted","Data":"8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18"} Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.695562 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.696985 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.697039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" event={"ID":"b1602c17-564b-482f-b5cc-cadd68ec07da","Type":"ContainerDied","Data":"f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e"} Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.697085 4808 scope.go:117] "RemoveContainer" containerID="02fe4733904170d6ff8ca546ae278d5400ac1f6b5e0058e060083b8b17f2a502" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.698791 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerStarted","Data":"63d14012fa7e0d1db45466cd7673614d41b6384d4b8d5ab46a11ce8b71cfbb93"} Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.708053 4808 generic.go:334] "Generic (PLEG): container finished" podID="56f9931d-b010-4282-9068-16b2e4e4b247" containerID="eaab0a6bfd8b2f49bb5b0419ebf83f83f3a7d7db298ba6d150f0ad5ee4951a2a" exitCode=0 Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.708121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerDied","Data":"eaab0a6bfd8b2f49bb5b0419ebf83f83f3a7d7db298ba6d150f0ad5ee4951a2a"} Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.719034 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerStarted","Data":"0ea8527a371975975278f77fbada0061706f8832d74429f7bac385a21fce660f"} Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.723393 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" podStartSLOduration=2.723371525 podStartE2EDuration="2.723371525s" podCreationTimestamp="2026-02-17 16:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:19.713720193 +0000 UTC m=+1163.230079266" watchObservedRunningTime="2026-02-17 16:13:19.723371525 +0000 UTC m=+1163.239730608" Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.833260 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"] Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.841934 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"] Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.728654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2ea38754-3b00-4bcb-93d9-28b60dda0e0a","Type":"ContainerStarted","Data":"4394d899179994d78a4e42db6f34ea90e3d9c5f609acb5be4ecfd05118f69bbf"} Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.728944 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.731269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0"} Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.747603 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/memcached-0" podStartSLOduration=17.315505518 podStartE2EDuration="46.747585626s" podCreationTimestamp="2026-02-17 16:12:34 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.139427626 +0000 UTC m=+1133.655786699" lastFinishedPulling="2026-02-17 16:13:19.571507734 +0000 UTC m=+1163.087866807" observedRunningTime="2026-02-17 16:13:20.743867335 +0000 UTC m=+1164.260226408" watchObservedRunningTime="2026-02-17 16:13:20.747585626 +0000 UTC m=+1164.263944699" Feb 17 16:13:21 crc kubenswrapper[4808]: I0217 16:13:21.167628 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" path="/var/lib/kubelet/pods/b1602c17-564b-482f-b5cc-cadd68ec07da/volumes" Feb 17 16:13:22 crc kubenswrapper[4808]: I0217 16:13:22.753219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"98654911aaf83ce6cc519f041d3a0e10f34536f058c65db77bda34adf754d38f"} Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.769421 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade81c90-5cdf-45d4-ad2f-52a3514e1596" containerID="0ea8527a371975975278f77fbada0061706f8832d74429f7bac385a21fce660f" exitCode=0 Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.769654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerDied","Data":"0ea8527a371975975278f77fbada0061706f8832d74429f7bac385a21fce660f"} Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.774269 4808 generic.go:334] "Generic (PLEG): container finished" podID="a020d38c-5e24-4266-96dc-9050e4d82f46" containerID="63d14012fa7e0d1db45466cd7673614d41b6384d4b8d5ab46a11ce8b71cfbb93" exitCode=0 Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.774310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerDied","Data":"63d14012fa7e0d1db45466cd7673614d41b6384d4b8d5ab46a11ce8b71cfbb93"} Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.783520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerStarted","Data":"a394837335a0eb508b22e180b1e69e1e33f3eda577ba4224fd4c1b14c7ac5119"} Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.785538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerStarted","Data":"0588f71e9a5f5fc1b883f656058d8cc65fead8be7fab00b0f6048fb1284601c0"} Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.787191 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8c434a76-4dcf-4c69-aefa-5cda8b120a26","Type":"ContainerStarted","Data":"bb093a9a448d6b27086896cfe5e9ec8580c0bc815915eb5536a1f7c2a75e71df"} Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.806420 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.475376049 podStartE2EDuration="53.80640645s" podCreationTimestamp="2026-02-17 16:12:31 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.412659639 +0000 UTC m=+1132.929018722" lastFinishedPulling="2026-02-17 16:13:18.74369005 +0000 UTC m=+1162.260049123" 
observedRunningTime="2026-02-17 16:13:24.805626989 +0000 UTC m=+1168.321986062" watchObservedRunningTime="2026-02-17 16:13:24.80640645 +0000 UTC m=+1168.322765523" Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.829474 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.667675566 podStartE2EDuration="44.829454255s" podCreationTimestamp="2026-02-17 16:12:40 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.515682414 +0000 UTC m=+1134.032041487" lastFinishedPulling="2026-02-17 16:13:23.677461103 +0000 UTC m=+1167.193820176" observedRunningTime="2026-02-17 16:13:24.825859317 +0000 UTC m=+1168.342218390" watchObservedRunningTime="2026-02-17 16:13:24.829454255 +0000 UTC m=+1168.345813328" Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.854209 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.094490593 podStartE2EDuration="51.854183684s" podCreationTimestamp="2026-02-17 16:12:33 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.148629176 +0000 UTC m=+1133.664988249" lastFinishedPulling="2026-02-17 16:13:18.908322277 +0000 UTC m=+1162.424681340" observedRunningTime="2026-02-17 16:13:24.847098232 +0000 UTC m=+1168.363457345" watchObservedRunningTime="2026-02-17 16:13:24.854183684 +0000 UTC m=+1168.370542777" Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.894168 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.896691 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.216449 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.797530 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerStarted","Data":"b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3"} Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.798264 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.800753 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"03937f48577de4a835f7ff8c33ce25fd6b70916328f18305898cd5ad82b45276"} Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.818130 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.631862538 podStartE2EDuration="49.818108212s" podCreationTimestamp="2026-02-17 16:12:36 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.368416416 +0000 UTC m=+1133.884775489" lastFinishedPulling="2026-02-17 16:13:25.55466205 +0000 UTC m=+1169.071021163" observedRunningTime="2026-02-17 16:13:25.811838593 +0000 UTC m=+1169.328197686" watchObservedRunningTime="2026-02-17 16:13:25.818108212 +0000 UTC m=+1169.334467285" Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.839135 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=16.912500915 
podStartE2EDuration="48.839110982s" podCreationTimestamp="2026-02-17 16:12:37 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.15433783 +0000 UTC m=+1133.670696903" lastFinishedPulling="2026-02-17 16:13:22.080947887 +0000 UTC m=+1165.597306970" observedRunningTime="2026-02-17 16:13:25.835034841 +0000 UTC m=+1169.351393944" watchObservedRunningTime="2026-02-17 16:13:25.839110982 +0000 UTC m=+1169.355470065" Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.624912 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.625124 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.817392 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0" exitCode=0 Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.818262 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0"} Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.819097 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.822608 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.048672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.311750 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.311984 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns" containerID="cri-o://8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18" gracePeriod=10 Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.314865 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.366892 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.374656 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:13:27 crc kubenswrapper[4808]: E0217 16:13:27.375008 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerName="init" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.375024 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerName="init" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.375185 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerName="init" Feb 17 16:13:27 crc 
kubenswrapper[4808]: I0217 16:13:27.376044 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.407888 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522719 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522922 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522966 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.523013 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.633165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.634363 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.643520 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.643652 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.643769 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.647801 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.651872 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.656454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.658878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.697641 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.834732 4808 generic.go:334] "Generic (PLEG): container finished" podID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerID="8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18" exitCode=0 Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.834793 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerDied","Data":"8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18"} Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.834844 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerDied","Data":"d63637f01ebacc82cd0cd4fa9f1b31ac08b1e5040c4e16549d0faa344661b80a"} Feb 17 16:13:27 crc 
kubenswrapper[4808]: I0217 16:13:27.834855 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d63637f01ebacc82cd0cd4fa9f1b31ac08b1e5040c4e16549d0faa344661b80a" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.843954 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964401 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964458 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.970480 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g" (OuterVolumeSpecName: "kube-api-access-6rj9g") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "kube-api-access-6rj9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.995797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.012652 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.023699 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.027193 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.040462 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config" (OuterVolumeSpecName: "config") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066429 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066456 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066467 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066476 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066486 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.288854 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.304752 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.390379 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.406562 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:13:28 crc 
kubenswrapper[4808]: E0217 16:13:28.407248 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="init" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.407365 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="init" Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.407498 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.407883 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.408196 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.415981 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.446486 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.447476 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.448882 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.452963 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dqpkp" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.517457 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.540878 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599176 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-cache\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599435 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599460 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-lock\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599554 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: 
\"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f52ebe4-f003-4d0b-8539-1d406db95b2f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599641 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hl7b\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-kube-api-access-6hl7b\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.700991 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-lock\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f52ebe4-f003-4d0b-8539-1d406db95b2f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hl7b\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-kube-api-access-6hl7b\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701149 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-cache\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.701255 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.701294 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.701361 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:29.201337968 +0000 UTC m=+1172.717697121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701522 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-cache\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701604 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-lock\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.707643 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f52ebe4-f003-4d0b-8539-1d406db95b2f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.714804 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.714867 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44b3cc468d8ea04f83345148f53d61bae2d04f9b0032f327344dd9c4f5b28475/globalmount\"" pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.722298 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hl7b\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-kube-api-access-6hl7b\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.758197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.843217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerStarted","Data":"ddfff32a5e606c9bd26b149ee55b24df69316a56d9a9ba2c7680c271a80e072c"} Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.843266 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.890895 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.899565 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.979797 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qg65w"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.980952 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.983144 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.983400 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.990323 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qg65w"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.991076 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110638 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110744 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110771 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110788 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.159398 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" path="/var/lib/kubelet/pods/27d2df02-b7e7-4fe9-a125-5a6acf093c85/volumes" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.212850 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213433 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.212951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213607 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213714 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213825 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc 
kubenswrapper[4808]: I0217 16:13:29.213853 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213935 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.214315 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.214409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: E0217 16:13:29.214433 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:29 crc kubenswrapper[4808]: E0217 16:13:29.214447 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:29 crc kubenswrapper[4808]: E0217 16:13:29.214487 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:30.214472791 +0000 UTC m=+1173.730831864 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.218185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.219308 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.222719 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.239808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.299634 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.538667 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.627485 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.678315 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.732061 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.808842 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qg65w"] Feb 17 16:13:29 crc kubenswrapper[4808]: W0217 16:13:29.815960 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2856a7_c37a_4ecc_a4a2_c49864240315.slice/crio-c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208 WatchSource:0}: Error finding container c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208: Status 404 returned error can't find the container with id c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208 Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.852068 4808 generic.go:334] "Generic (PLEG): container finished" podID="317e56c8-5f01-4313-a632-12ccaccf9442" containerID="05efd9fb2a30652e1a674ecb739d46dca429eecdc2a90da4de03961953c36078" exitCode=0 Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.852129 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerDied","Data":"05efd9fb2a30652e1a674ecb739d46dca429eecdc2a90da4de03961953c36078"} Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.854357 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerStarted","Data":"c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208"} Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.025594 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.027414 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037194 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-677jx" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037440 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037672 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037906 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.051105 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.131899 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-scripts\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.132109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.132280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncdh\" (UniqueName: \"kubernetes.io/projected/79b7a04d-f324-40d0-ad2b-370cfef43858-kube-api-access-zncdh\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.132464 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.132816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-config\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.133524 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.133690 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: 
I0217 16:13:30.238443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-config\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238584 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238619 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238712 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-scripts\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238756 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zncdh\" (UniqueName: \"kubernetes.io/projected/79b7a04d-f324-40d0-ad2b-370cfef43858-kube-api-access-zncdh\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238775 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238821 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:30 crc kubenswrapper[4808]: E0217 16:13:30.238995 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:30 crc kubenswrapper[4808]: E0217 16:13:30.239009 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:30 crc kubenswrapper[4808]: E0217 16:13:30.239057 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. 
No retries permitted until 2026-02-17 16:13:32.239042582 +0000 UTC m=+1175.755401655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.239862 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.243372 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.245036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-config\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.250659 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.252979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.260294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-scripts\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.263145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zncdh\" (UniqueName: \"kubernetes.io/projected/79b7a04d-f324-40d0-ad2b-370cfef43858-kube-api-access-zncdh\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.373925 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.865269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerStarted","Data":"5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a"} Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.867394 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.900985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.909176 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podStartSLOduration=3.909157695 podStartE2EDuration="3.909157695s" podCreationTimestamp="2026-02-17 16:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:30.888008552 +0000 UTC m=+1174.404367625" watchObservedRunningTime="2026-02-17 16:13:30.909157695 +0000 UTC m=+1174.425516768" Feb 17 16:13:31 crc kubenswrapper[4808]: I0217 16:13:31.877502 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"79b7a04d-f324-40d0-ad2b-370cfef43858","Type":"ContainerStarted","Data":"94459071397bab42a5432e97d2a82ed90d6a1670865721bd5f60b89b0be2a2ed"} Feb 17 16:13:32 crc kubenswrapper[4808]: I0217 16:13:32.283771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:32 crc kubenswrapper[4808]: E0217 16:13:32.284007 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:32 crc kubenswrapper[4808]: E0217 16:13:32.284297 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:32 crc kubenswrapper[4808]: E0217 16:13:32.284366 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:36.284346009 +0000 UTC m=+1179.800705082 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:32 crc kubenswrapper[4808]: I0217 16:13:32.709591 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.241672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.241798 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.325969 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.612953 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l2f2z"] Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.614039 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.623102 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.623401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l2f2z"] Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.708199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.708300 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.810794 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.810878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.824014 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.830773 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.939969 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.962288 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.362008 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cw2fg"] Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.365306 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.371711 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cw2fg"] Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.478618 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"] Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.486074 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"] Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.486186 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.489220 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.558044 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.558348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.659848 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.660293 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.660481 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.660505 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.662005 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.694311 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.762316 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.762389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.765262 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.779087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.803075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.994149 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.120533 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6mgt5"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.122289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.131223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6mgt5"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.222857 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.224364 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.227223 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.230218 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288327 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288465 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288526 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: E0217 16:13:36.290181 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:36 crc kubenswrapper[4808]: E0217 16:13:36.290348 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:36 crc kubenswrapper[4808]: E0217 16:13:36.290424 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:44.290405865 +0000 UTC m=+1187.806764938 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.389836 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.389903 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.389952 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.390008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.390930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.391374 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.423848 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.430443 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.450170 4808 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mp9g8"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.451392 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.466717 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.468153 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.473852 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.482410 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.490949 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mp9g8"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.507622 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"] Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.542026 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.594816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.595162 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.595269 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.595348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697631 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod 
\"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697739 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697895 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.699284 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.700463 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.715343 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.715712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.824699 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.858159 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.192192 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"] Feb 17 16:13:37 crc kubenswrapper[4808]: W0217 16:13:37.202366 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbacbd93_bbc0_4360_bc45_9782988bd3c0.slice/crio-fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465 WatchSource:0}: Error finding container fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465: Status 404 returned error can't find the container with id fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465 Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.223731 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.289169 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:13:37 crc kubenswrapper[4808]: W0217 16:13:37.349236 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc5e9f09_05c9_4fa2_8e39_22ffa4fa8d2c.slice/crio-67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192 WatchSource:0}: Error finding container 67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192: Status 404 returned error can't find the container with id 67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192 Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.351547 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l2f2z"] Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.491363 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"] Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.501838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6mgt5"] Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.517317 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cw2fg"] Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.737674 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"] Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.747471 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mp9g8"] Feb 17 16:13:37 crc kubenswrapper[4808]: W0217 16:13:37.793458 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56341195_0325_4b22_ba76_8f792fbbcdb6.slice/crio-d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225 WatchSource:0}: Error finding container d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225: Status 404 returned error can't find the container with id d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225 Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.929192 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" 
event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerStarted","Data":"313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.929239 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerStarted","Data":"c89dbe2cc7630ae1cc4dfb777a53044b9caf01f9b81ec512acbb427ca87dadf9"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.933623 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc9-account-create-update-hsl6c" event={"ID":"58e700c8-ab25-47a2-a6cf-e85ffcb57e74","Type":"ContainerStarted","Data":"ff8a1308f30cac05f4582dcef33e2089bd45ba7c33c330702b7e8ec8f4a48526"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.940248 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"79b7a04d-f324-40d0-ad2b-370cfef43858","Type":"ContainerStarted","Data":"a72850f8f00fd340022c4bb892c35b0149af790964fead3e49b61535eefcdf37"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.940294 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"79b7a04d-f324-40d0-ad2b-370cfef43858","Type":"ContainerStarted","Data":"bcf18da17ab80ed0879939884efc09d2733aac447dc222187451816e2f2f9d3f"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.941066 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.943153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e92-account-create-update-s8tnj" event={"ID":"850d66dd-e985-408b-93a0-8251cfd8dbc5","Type":"ContainerStarted","Data":"285375d2088a10c12e0cc841d85c9fdfa40b8c2ff310c72a4cadbe5048c52b8c"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.948527 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerStarted","Data":"531cd6842c615f80a678de85ab5ffd56ce530c2a4ddaf1a8a62d7dbfe638cf33"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.950468 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l2f2z" event={"ID":"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c","Type":"ContainerStarted","Data":"67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.951416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mp9g8" event={"ID":"56341195-0325-4b22-ba76-8f792fbbcdb6","Type":"ContainerStarted","Data":"d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.952350 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cw2fg" event={"ID":"850baae5-89be-441f-85e0-f2f0ec68bdc3","Type":"ContainerStarted","Data":"590c5689226b24e8a79cadbae587b15db602a7fa85141bb00ffbdcd1faf2d3ef"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.955190 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6mgt5" podStartSLOduration=1.955171259 podStartE2EDuration="1.955171259s" podCreationTimestamp="2026-02-17 16:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 16:13:37.944562622 +0000 UTC m=+1181.460921695" watchObservedRunningTime="2026-02-17 16:13:37.955171259 +0000 UTC m=+1181.471530342" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.957197 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.967452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerStarted","Data":"8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.967503 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerStarted","Data":"fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.969478 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.976461773 podStartE2EDuration="7.969459136s" podCreationTimestamp="2026-02-17 16:13:30 +0000 UTC" firstStartedPulling="2026-02-17 16:13:30.917313726 +0000 UTC m=+1174.433672799" lastFinishedPulling="2026-02-17 16:13:36.910311089 +0000 UTC m=+1180.426670162" observedRunningTime="2026-02-17 16:13:37.960473943 +0000 UTC m=+1181.476833026" watchObservedRunningTime="2026-02-17 16:13:37.969459136 +0000 UTC m=+1181.485818209" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.997756 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.008031 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qg65w" podStartSLOduration=3.056000352 podStartE2EDuration="10.00800574s" podCreationTimestamp="2026-02-17 16:13:28 +0000 UTC" firstStartedPulling="2026-02-17 16:13:29.818382212 +0000 UTC m=+1173.334741285" lastFinishedPulling="2026-02-17 16:13:36.7703876 +0000 UTC m=+1180.286746673" observedRunningTime="2026-02-17 16:13:37.98401171 +0000 UTC m=+1181.500370803" watchObservedRunningTime="2026-02-17 16:13:38.00800574 +0000 UTC m=+1181.524364833" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.012345 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1c2d-account-create-update-5rmst" podStartSLOduration=3.012326606 podStartE2EDuration="3.012326606s" podCreationTimestamp="2026-02-17 16:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:37.998158383 +0000 UTC m=+1181.514517466" watchObservedRunningTime="2026-02-17 16:13:38.012326606 +0000 UTC m=+1181.528685699" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.061683 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.066039 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" 
containerID="cri-o://3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" gracePeriod=10 Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.221740 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.864085 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.948525 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"24cc6fe1-da44-4d61-98bf-3088b398903b\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.948645 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"24cc6fe1-da44-4d61-98bf-3088b398903b\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.948728 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"24cc6fe1-da44-4d61-98bf-3088b398903b\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.962877 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8" (OuterVolumeSpecName: "kube-api-access-zdqm8") pod "24cc6fe1-da44-4d61-98bf-3088b398903b" (UID: "24cc6fe1-da44-4d61-98bf-3088b398903b"). InnerVolumeSpecName "kube-api-access-zdqm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.051541 4808 generic.go:334] "Generic (PLEG): container finished" podID="7419b027-2686-4ba4-9459-30a4362d34f0" containerID="313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.051660 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerDied","Data":"313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.062827 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.074501 4808 generic.go:334] "Generic (PLEG): container finished" podID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerID="77cbcade43f0ae77b54c73845bcb62b81d16918f6513db83061d64f348ec9b2b" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.074604 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mp9g8" event={"ID":"56341195-0325-4b22-ba76-8f792fbbcdb6","Type":"ContainerDied","Data":"77cbcade43f0ae77b54c73845bcb62b81d16918f6513db83061d64f348ec9b2b"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.077375 4808 generic.go:334] "Generic (PLEG): container finished" podID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerID="8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.077426 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerDied","Data":"8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.079681 4808 generic.go:334] "Generic (PLEG): container finished" podID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerID="d6c0e57ec0c9fe5da75d2c778f8867455af3d9bb73146a28181bca20e679417d" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.079732 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cw2fg" event={"ID":"850baae5-89be-441f-85e0-f2f0ec68bdc3","Type":"ContainerDied","Data":"d6c0e57ec0c9fe5da75d2c778f8867455af3d9bb73146a28181bca20e679417d"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.085359 4808 generic.go:334] "Generic (PLEG): container finished" podID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerID="92a52a548321e7e91228a92677db66adc649f3fd4be4a1f0b2dcb81c8ce95063" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.085441 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc9-account-create-update-hsl6c" event={"ID":"58e700c8-ab25-47a2-a6cf-e85ffcb57e74","Type":"ContainerDied","Data":"92a52a548321e7e91228a92677db66adc649f3fd4be4a1f0b2dcb81c8ce95063"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.105439 4808 generic.go:334] "Generic (PLEG): container finished" podID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerID="b9a6e75c4872c463e0bee7ea278256a76575233d65a1cb8980723a4259e57365" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.105497 4808 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-1e92-account-create-update-s8tnj" event={"ID":"850d66dd-e985-408b-93a0-8251cfd8dbc5","Type":"ContainerDied","Data":"b9a6e75c4872c463e0bee7ea278256a76575233d65a1cb8980723a4259e57365"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.121761 4808 generic.go:334] "Generic (PLEG): container finished" podID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.121936 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerDied","Data":"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.122688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerDied","Data":"4a7ab805f716d84e3d73f9394b1b45757927f27450dd37708e63205a258bb4f5"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.122023 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.122720 4808 scope.go:117] "RemoveContainer" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.131383 4808 generic.go:334] "Generic (PLEG): container finished" podID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerID="7aea08d602941315a47910cfb8dca2a1ac4425726486c35b99c77739c12a5b14" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.132277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l2f2z" event={"ID":"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c","Type":"ContainerDied","Data":"7aea08d602941315a47910cfb8dca2a1ac4425726486c35b99c77739c12a5b14"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.174445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24cc6fe1-da44-4d61-98bf-3088b398903b" (UID: "24cc6fe1-da44-4d61-98bf-3088b398903b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.179435 4808 scope.go:117] "RemoveContainer" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.269647 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.289685 4808 scope.go:117] "RemoveContainer" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" Feb 17 16:13:39 crc kubenswrapper[4808]: E0217 16:13:39.290106 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce\": container with ID starting with 3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce not found: ID does not exist" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.290130 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce"} err="failed to get container status \"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce\": rpc error: code = NotFound desc = could not find container \"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce\": container with ID starting with 3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce not found: ID does not exist" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.290149 4808 scope.go:117] "RemoveContainer" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" Feb 17 16:13:39 crc kubenswrapper[4808]: E0217 16:13:39.290353 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d\": container with ID starting with 5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d not found: ID does not exist" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.290367 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d"} err="failed to get container status \"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d\": rpc error: code = NotFound desc = could not find container \"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d\": container with ID starting with 5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d not found: ID does not exist" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.298691 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config" (OuterVolumeSpecName: "config") pod "24cc6fe1-da44-4d61-98bf-3088b398903b" (UID: "24cc6fe1-da44-4d61-98bf-3088b398903b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.372700 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.458986 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.473721 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.527676 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.605298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.605432 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.606665 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbacbd93-bbc0-4360-bc45-9782988bd3c0" (UID: "dbacbd93-bbc0-4360-bc45-9782988bd3c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.613924 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j" (OuterVolumeSpecName: "kube-api-access-hm29j") pod "dbacbd93-bbc0-4360-bc45-9782988bd3c0" (UID: "dbacbd93-bbc0-4360-bc45-9782988bd3c0"). InnerVolumeSpecName "kube-api-access-hm29j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.708978 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.709022 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.727868 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.734948 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.765024 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810033 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"56341195-0325-4b22-ba76-8f792fbbcdb6\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810473 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810642 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810786 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"56341195-0325-4b22-ba76-8f792fbbcdb6\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810919 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"850d66dd-e985-408b-93a0-8251cfd8dbc5\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811747 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"850d66dd-e985-408b-93a0-8251cfd8dbc5\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.813559 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pfcvm" podUID="8a76a2ff-ed1a-4279-898c-54e85973f024" containerName="ovn-controller" probeResult="failure" output=< Feb 17 16:13:40 crc kubenswrapper[4808]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 16:13:40 crc kubenswrapper[4808]: > Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811083 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" (UID: "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811602 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56341195-0325-4b22-ba76-8f792fbbcdb6" (UID: "56341195-0325-4b22-ba76-8f792fbbcdb6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811627 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "850d66dd-e985-408b-93a0-8251cfd8dbc5" (UID: "850d66dd-e985-408b-93a0-8251cfd8dbc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.822766 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd" (OuterVolumeSpecName: "kube-api-access-8wnbd") pod "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" (UID: "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c"). InnerVolumeSpecName "kube-api-access-8wnbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.822874 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr" (OuterVolumeSpecName: "kube-api-access-tv5tr") pod "850d66dd-e985-408b-93a0-8251cfd8dbc5" (UID: "850d66dd-e985-408b-93a0-8251cfd8dbc5"). InnerVolumeSpecName "kube-api-access-tv5tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.839878 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl" (OuterVolumeSpecName: "kube-api-access-2v7cl") pod "56341195-0325-4b22-ba76-8f792fbbcdb6" (UID: "56341195-0325-4b22-ba76-8f792fbbcdb6"). InnerVolumeSpecName "kube-api-access-2v7cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915030 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915391 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915404 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915417 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915428 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915440 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.932317 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.986231 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.992958 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.016249 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"7419b027-2686-4ba4-9459-30a4362d34f0\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.016360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"7419b027-2686-4ba4-9459-30a4362d34f0\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.017118 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7419b027-2686-4ba4-9459-30a4362d34f0" (UID: "7419b027-2686-4ba4-9459-30a4362d34f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.019609 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6" (OuterVolumeSpecName: "kube-api-access-ngpd6") pod "7419b027-2686-4ba4-9459-30a4362d34f0" (UID: "7419b027-2686-4ba4-9459-30a4362d34f0"). InnerVolumeSpecName "kube-api-access-ngpd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117425 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117567 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117622 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"850baae5-89be-441f-85e0-f2f0ec68bdc3\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"850baae5-89be-441f-85e0-f2f0ec68bdc3\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117962 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58e700c8-ab25-47a2-a6cf-e85ffcb57e74" (UID: "58e700c8-ab25-47a2-a6cf-e85ffcb57e74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "850baae5-89be-441f-85e0-f2f0ec68bdc3" (UID: "850baae5-89be-441f-85e0-f2f0ec68bdc3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118743 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118764 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118774 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118784 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.121223 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh" (OuterVolumeSpecName: "kube-api-access-krhzh") pod "58e700c8-ab25-47a2-a6cf-e85ffcb57e74" (UID: "58e700c8-ab25-47a2-a6cf-e85ffcb57e74"). InnerVolumeSpecName "kube-api-access-krhzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.128275 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885" (OuterVolumeSpecName: "kube-api-access-b8885") pod "850baae5-89be-441f-85e0-f2f0ec68bdc3" (UID: "850baae5-89be-441f-85e0-f2f0ec68bdc3"). InnerVolumeSpecName "kube-api-access-b8885". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.154845 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.156607 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.156604 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" path="/var/lib/kubelet/pods/24cc6fe1-da44-4d61-98bf-3088b398903b/volumes" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.160320 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.161906 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.163871 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.167990 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l2f2z" event={"ID":"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c","Type":"ContainerDied","Data":"67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.173825 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.173993 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerDied","Data":"c89dbe2cc7630ae1cc4dfb777a53044b9caf01f9b81ec512acbb427ca87dadf9"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174148 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89dbe2cc7630ae1cc4dfb777a53044b9caf01f9b81ec512acbb427ca87dadf9" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174266 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mp9g8" event={"ID":"56341195-0325-4b22-ba76-8f792fbbcdb6","Type":"ContainerDied","Data":"d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174404 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174530 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerDied","Data":"fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174681 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174800 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cw2fg" event={"ID":"850baae5-89be-441f-85e0-f2f0ec68bdc3","Type":"ContainerDied","Data":"590c5689226b24e8a79cadbae587b15db602a7fa85141bb00ffbdcd1faf2d3ef"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174934 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590c5689226b24e8a79cadbae587b15db602a7fa85141bb00ffbdcd1faf2d3ef" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174959 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174764 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174983 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc9-account-create-update-hsl6c" event={"ID":"58e700c8-ab25-47a2-a6cf-e85ffcb57e74","Type":"ContainerDied","Data":"ff8a1308f30cac05f4582dcef33e2089bd45ba7c33c330702b7e8ec8f4a48526"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174993 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8a1308f30cac05f4582dcef33e2089bd45ba7c33c330702b7e8ec8f4a48526" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.176554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e92-account-create-update-s8tnj" event={"ID":"850d66dd-e985-408b-93a0-8251cfd8dbc5","Type":"ContainerDied","Data":"285375d2088a10c12e0cc841d85c9fdfa40b8c2ff310c72a4cadbe5048c52b8c"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.176586 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285375d2088a10c12e0cc841d85c9fdfa40b8c2ff310c72a4cadbe5048c52b8c" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.176620 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.221163 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.221192 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.913981 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l2f2z"] Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.923354 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l2f2z"] Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.007466 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008293 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008317 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008333 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008341 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008352 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008360 
4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008391 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008399 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008410 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008419 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008430 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008438 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008447 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="init" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008454 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="init" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008463 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008471 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008485 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008493 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008707 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008728 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008738 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008749 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008760 4808 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008779 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008787 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008795 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.009466 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.014675 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.023204 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.135088 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.135249 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.237057 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.238736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.239480 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.258294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7p2\" (UniqueName: 
\"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.331629 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.705495 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:43 crc kubenswrapper[4808]: I0217 16:13:43.166238 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" path="/var/lib/kubelet/pods/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c/volumes" Feb 17 16:13:43 crc kubenswrapper[4808]: I0217 16:13:43.210379 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerStarted","Data":"aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94"} Feb 17 16:13:43 crc kubenswrapper[4808]: I0217 16:13:43.210419 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerStarted","Data":"98c2800077894190b1e9521bc93e98e57fb1374bafdeb5e31d595195ddc58cf4"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.220871 4808 generic.go:334] "Generic (PLEG): container finished" podID="698c36e9-5f87-4836-8660-aaceac669005" containerID="19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430" exitCode=0 Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.220954 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerDied","Data":"19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.222905 4808 generic.go:334] "Generic (PLEG): container finished" podID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerID="aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94" exitCode=0 Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.222953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerDied","Data":"aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.225126 4808 generic.go:334] "Generic (PLEG): container finished" podID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" exitCode=0 Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.225171 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerDied","Data":"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.256831 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kt8sq" podStartSLOduration=3.256794888 podStartE2EDuration="3.256794888s" podCreationTimestamp="2026-02-17 16:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:43.22788208 +0000 UTC m=+1186.744241153" watchObservedRunningTime="2026-02-17 16:13:44.256794888 +0000 UTC m=+1187.773153961" Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.340449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:44 crc kubenswrapper[4808]: E0217 16:13:44.341322 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:44 crc kubenswrapper[4808]: E0217 16:13:44.341351 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:44 crc kubenswrapper[4808]: E0217 16:13:44.341398 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:14:00.341379059 +0000 UTC m=+1203.857738222 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.235525 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7"} Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.244921 4808 generic.go:334] "Generic (PLEG): container finished" podID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerID="531cd6842c615f80a678de85ab5ffd56ce530c2a4ddaf1a8a62d7dbfe638cf33" exitCode=0 Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.244979 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerDied","Data":"531cd6842c615f80a678de85ab5ffd56ce530c2a4ddaf1a8a62d7dbfe638cf33"} Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.248856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerStarted","Data":"d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e"} Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.249118 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.252915 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerStarted","Data":"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807"} Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.253529 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.270742 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.15672832 podStartE2EDuration="1m8.27070954s" podCreationTimestamp="2026-02-17 16:12:37 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.127329789 +0000 UTC m=+1133.643688862" lastFinishedPulling="2026-02-17 16:13:44.241311009 +0000 UTC m=+1187.757670082" observedRunningTime="2026-02-17 16:13:45.268380878 +0000 UTC m=+1188.784739951" watchObservedRunningTime="2026-02-17 16:13:45.27070954 +0000 UTC m=+1188.787068613" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.347507 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.555325178 podStartE2EDuration="1m15.347484359s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.427320235 +0000 UTC m=+1132.943679308" lastFinishedPulling="2026-02-17 16:13:07.219479416 +0000 UTC m=+1150.735838489" observedRunningTime="2026-02-17 16:13:45.318197746 +0000 UTC m=+1188.834556819" watchObservedRunningTime="2026-02-17 16:13:45.347484359 +0000 UTC m=+1188.863843432" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.348793 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.738267396 podStartE2EDuration="1m15.348787774s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:48.992849162 +0000 UTC m=+1132.509208235" lastFinishedPulling="2026-02-17 16:13:07.60336953 +0000 UTC m=+1151.119728613" observedRunningTime="2026-02-17 16:13:45.342212276 +0000 UTC m=+1188.858571359" watchObservedRunningTime="2026-02-17 16:13:45.348787774 +0000 UTC m=+1188.865146847" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.630278 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.631616 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.634848 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.635407 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xhb8t" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.643220 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.766839 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.778821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.778866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.779015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.779049 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.815264 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.820755 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pfcvm" podUID="8a76a2ff-ed1a-4279-898c-54e85973f024" containerName="ovn-controller" probeResult="failure" output=< Feb 17 16:13:45 crc kubenswrapper[4808]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 16:13:45 crc kubenswrapper[4808]: > Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.821437 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880207 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"6940f857-9d37-4d69-8b1a-33208fe6de43\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880378 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"6940f857-9d37-4d69-8b1a-33208fe6de43\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880620 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " 
pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880655 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880763 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880787 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.883248 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6940f857-9d37-4d69-8b1a-33208fe6de43" (UID: "6940f857-9d37-4d69-8b1a-33208fe6de43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.889309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.889768 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.898192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.898399 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2" (OuterVolumeSpecName: "kube-api-access-kn7p2") pod "6940f857-9d37-4d69-8b1a-33208fe6de43" (UID: "6940f857-9d37-4d69-8b1a-33208fe6de43"). InnerVolumeSpecName "kube-api-access-kn7p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.920242 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.949014 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.985748 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.985784 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.272905 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.276889 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerDied","Data":"98c2800077894190b1e9521bc93e98e57fb1374bafdeb5e31d595195ddc58cf4"} Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.276950 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c2800077894190b1e9521bc93e98e57fb1374bafdeb5e31d595195ddc58cf4" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.285064 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:46 crc kubenswrapper[4808]: E0217 16:13:46.286806 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerName="mariadb-account-create-update" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.286826 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerName="mariadb-account-create-update" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.287100 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerName="mariadb-account-create-update" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.290371 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.297800 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.368646 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.409772 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.409903 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.409926 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.410009 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.410062 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.410221 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514361 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514399 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod 
\"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514479 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514540 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514654 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514811 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.515710 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.517707 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod 
\"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.543967 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.644774 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.773560 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.926596 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023510 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023568 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023625 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023802 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023877 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023910 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 
crc kubenswrapper[4808]: I0217 16:13:47.033252 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.033482 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.034288 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.058816 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk" (OuterVolumeSpecName: "kube-api-access-9vndk") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "kube-api-access-9vndk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.059292 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts" (OuterVolumeSpecName: "scripts") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.073132 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.082786 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126134 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126355 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126476 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126559 4808 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126713 4808 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126780 4808 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126873 4808 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.217355 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.286688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm-config-zqwjk" event={"ID":"92655725-c36f-4e8a-bdb4-12fa4e41a3d7","Type":"ContainerStarted","Data":"04e048c7a3bbfd39b61c305cae990b37bd53a929ece350691f5e86c6d1b68fd6"} Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.289575 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerDied","Data":"c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208"} Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.289631 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.290137 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.291445 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerStarted","Data":"e5bfc747bb74b14a5184eb3f8c16443aca59a2667d60646ea7965a405418e0b0"} Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.214884 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.561980 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.636097 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.644938 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:49 crc kubenswrapper[4808]: I0217 16:13:49.154545 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" path="/var/lib/kubelet/pods/6940f857-9d37-4d69-8b1a-33208fe6de43/volumes" Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.315266 4808 generic.go:334] "Generic (PLEG): container finished" podID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerID="393504cd886f25701edec85a116ae5e2c966bd8cc6f3213385ba9edc2a2c6ec3" exitCode=0 Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.315332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm-config-zqwjk" event={"ID":"92655725-c36f-4e8a-bdb4-12fa4e41a3d7","Type":"ContainerDied","Data":"393504cd886f25701edec85a116ae5e2c966bd8cc6f3213385ba9edc2a2c6ec3"} Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.441762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.794178 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pfcvm" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.675506 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843158 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843255 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843384 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843465 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843488 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843881 4808 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843895 4808 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.844476 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.844571 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run" (OuterVolumeSpecName: "var-run") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.845518 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts" (OuterVolumeSpecName: "scripts") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.851864 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg" (OuterVolumeSpecName: "kube-api-access-gfnrg") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "kube-api-access-gfnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945556 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945599 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945608 4808 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945618 4808 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.334480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm-config-zqwjk" event={"ID":"92655725-c36f-4e8a-bdb4-12fa4e41a3d7","Type":"ContainerDied","Data":"04e048c7a3bbfd39b61c305cae990b37bd53a929ece350691f5e86c6d1b68fd6"} Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.334748 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e048c7a3bbfd39b61c305cae990b37bd53a929ece350691f5e86c6d1b68fd6" Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.334533 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.772427 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.778749 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.158897 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" path="/var/lib/kubelet/pods/92655725-c36f-4e8a-bdb4-12fa4e41a3d7/volumes" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.561531 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.564879 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.671703 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f2jqv"] Feb 17 16:13:53 crc kubenswrapper[4808]: E0217 16:13:53.672158 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerName="swift-ring-rebalance" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672181 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerName="swift-ring-rebalance" Feb 17 16:13:53 crc kubenswrapper[4808]: E0217 16:13:53.672190 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerName="ovn-config" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672196 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerName="ovn-config" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672395 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerName="ovn-config" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672424 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerName="swift-ring-rebalance" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.673732 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.676635 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.683916 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2jqv"] Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.774728 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.775689 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.877063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.877366 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.877769 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.895227 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.988585 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f2jqv" Feb 17 16:13:54 crc kubenswrapper[4808]: I0217 16:13:54.352058 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:13:54 crc kubenswrapper[4808]: I0217 16:13:54.467296 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2jqv"] Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.831396 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.831988 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" containerID="cri-o://4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693" gracePeriod=600 Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.832219 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar" containerID="cri-o://3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7" gracePeriod=600 Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.832323 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader" containerID="cri-o://8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc" gracePeriod=600 Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382099 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7" exitCode=0 Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382696 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc" exitCode=0 Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382712 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693" exitCode=0 Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382198 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7"} Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382756 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc"} Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382775 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693"} Feb 17 16:13:58 crc kubenswrapper[4808]: I0217 16:13:58.215185 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" 
podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:13:58 crc kubenswrapper[4808]: I0217 16:13:58.563106 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 17 16:14:00 crc kubenswrapper[4808]: I0217 16:14:00.412972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:14:00 crc kubenswrapper[4808]: I0217 16:14:00.460331 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:14:00 crc kubenswrapper[4808]: I0217 16:14:00.462127 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:14:01 crc kubenswrapper[4808]: I0217 16:14:01.788840 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.047210 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.167093 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jmq6n"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.173905 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.181721 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmq6n"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.304326 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.305908 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.310399 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.322384 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.350026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.350108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452239 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452317 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452349 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.453131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.473518 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.474967 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.490624 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.496529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.497074 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.498418 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.500472 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.512725 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.525802 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.554871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.555297 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.555348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.555472 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.556312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " 
pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.588896 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.590763 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jqrq2"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.591946 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.599305 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.600352 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.609880 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqrq2"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.614860 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.619242 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.626057 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.657977 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.658036 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.658109 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.658149 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.660001 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.671014 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kzjns"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.673028 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.676268 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.676494 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.676886 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.684892 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.687385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.689699 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ktddg"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.690862 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.723744 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ktddg"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.740782 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kzjns"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.761490 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.761847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762025 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762141 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762236 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762347 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762465 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762705 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.792190 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.842543 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.855749 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.861037 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.862248 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864458 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864518 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864562 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864624 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864651 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864708 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864789 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.865481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.865595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.868885 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.869254 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.884814 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"] Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.886742 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.887347 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.918092 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966007 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " 
pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966456 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966476 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966616 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.967158 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.983916 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.983936 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.046063 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.057409 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.068429 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.068514 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.069113 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.086541 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.225920 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: W0217 16:14:03.522191 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7377369f_b540_4b85_be05_4200c9695a41.slice/crio-8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3 WatchSource:0}: Error finding container 8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3: Status 404 returned error can't find the container with id 8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3 Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.526975 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.562352 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 17 16:14:03 crc kubenswrapper[4808]: E0217 16:14:03.655047 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 17 16:14:03 crc kubenswrapper[4808]: E0217 16:14:03.655207 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb486,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Restar
tPolicy:nil,} start failed in pod glance-db-sync-4mdzt_openstack(e4002815-8dd4-4668-bea7-0d54bdaa4dd6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:03 crc kubenswrapper[4808]: E0217 16:14:03.656433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-4mdzt" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.132711 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301415 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301461 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301519 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301583 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301612 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301683 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301747 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301791 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.302856 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.303715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.304716 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.324963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.325033 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7" (OuterVolumeSpecName: "kube-api-access-sh7d7") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "kube-api-access-sh7d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.325177 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.331802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config" (OuterVolumeSpecName: "config") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.331815 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out" (OuterVolumeSpecName: "config-out") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.364219 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.380059 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config" (OuterVolumeSpecName: "web-config") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403621 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403685 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") on node \"crc\" " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403698 4808 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403711 4808 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403721 4808 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403731 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403740 4808 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403749 4808 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403759 4808 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403768 4808 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.423054 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.423248 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a") on node "crc"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.437785 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"c5db49362fb8e196d602a48475009fd093a64b0b760100ed93c1a54dba3d1832"}
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.437809 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.437841 4808 scope.go:117] "RemoveContainer" containerID="3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.441211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerStarted","Data":"2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66"}
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.441265 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerStarted","Data":"8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3"}
Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.459803 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-4mdzt" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.494150 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-f2jqv" podStartSLOduration=11.494133743999999 podStartE2EDuration="11.494133744s" podCreationTimestamp="2026-02-17 16:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:04.492712486 +0000 UTC m=+1208.009071559" watchObservedRunningTime="2026-02-17 16:14:04.494133744 +0000 UTC m=+1208.010492807"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.495702 4808 scope.go:117] "RemoveContainer" containerID="8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.505396 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.637732 4808 scope.go:117] "RemoveContainer" containerID="4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.645725 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.680921 4808 scope.go:117] "RemoveContainer" containerID="2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.692317 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702372 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702824 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702835 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader"
Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702849 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="init-config-reloader"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702856 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="init-config-reloader"
Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702868 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar"
Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702887 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702893 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.703062 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.703077 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.703088 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.704725 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.712025 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723316 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723598 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723731 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723857 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.724980 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.725112 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.726089 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2wbtf"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.727791 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.727868 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810310 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810537 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810568 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810634 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810676 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810806 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810832 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh72v\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-kube-api-access-lh72v\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810925 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.852106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqrq2"]
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912127 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912285 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh72v\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-kube-api-access-lh72v\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912377 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912399 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912426 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912448 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912468 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.913597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.914393 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.918124 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.919139 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.922396 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.922593 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.926813 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.926947 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f40780962e64d13d6799d8a1c9a177793dc18d1eb26c87512c3b4aff3215b0d/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.928157 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.935151 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.935285 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.943533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.944223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh72v\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-kube-api-access-lh72v\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.944239 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.967873 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.093242 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.163948 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" path="/var/lib/kubelet/pods/2917eca2-0431-4bd6-ad96-ab8464cc4fd7/volumes"
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.198686 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.302992 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmq6n"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.311183 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.330518 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.339368 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.351745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ktddg"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.354617 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kzjns"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.361156 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"]
Feb 17 16:14:05 crc kubenswrapper[4808]: W0217 16:14:05.387666 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2495c4d6_8174_4b4d_9114_968620fbba31.slice/crio-e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51 WatchSource:0}: Error finding container e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51: Status 404 returned error can't find the container with id e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.445061 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.451554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmq6n" event={"ID":"3ccecd7d-0e59-4336-a6ec-a595adbb727e","Type":"ContainerStarted","Data":"6c4dad549168fd0fe9877db14a616f977db4f3678b2cef50d4cc95501cb7ec97"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.454851 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d8-account-create-update-5vsvx" event={"ID":"02478fdd-380d-42f9-b105-c3ae86d224a8","Type":"ContainerStarted","Data":"3b5e73a2bf501307ef0912c3e2417e209a9bf79f1629e4736731809703ca6124"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.465422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerStarted","Data":"c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.465476 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerStarted","Data":"f21b1b34203e339a6df9f3de1f3c14db9849e5fd507a49d6a22a7fc36cc73dbc"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.468026 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-78cc-account-create-update-k7vgl" event={"ID":"e183e901-16a0-43cf-9ce5-ef36da8686d1","Type":"ContainerStarted","Data":"e734ff22797424d60d75d0ff894eb99b0a93ed10a3801fb6e5b9a52dcc8e1b52"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.470216 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.470868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8c80-account-create-update-rk4jj" event={"ID":"e5180ea6-12c0-4463-8fe5-c35ab2a15b44","Type":"ContainerStarted","Data":"1c00c6c47bb9156cd63db3c65a93373cbafb8faed7fa643611f22da349c11bb0"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.474236 4808 generic.go:334] "Generic (PLEG): container finished" podID="7377369f-b540-4b85-be05-4200c9695a41" containerID="2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66" exitCode=0
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.474282 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerDied","Data":"2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.478074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ktddg" event={"ID":"ff670244-5344-4409-9823-6bfcf9ed274d","Type":"ContainerStarted","Data":"db36c3bbf39537df83a4da37662c8e67b4aa150cf22c4630a5ddf0b8ff0b32b4"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.479445 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-r5lfk" event={"ID":"72e328d4-94e9-42bc-ae1c-b07b01d80072","Type":"ContainerStarted","Data":"021f8a63c457f5f6931040c6e0c6166d1f2402d15c0182fc36f0fd1a25056869"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.488204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" event={"ID":"2495c4d6-8174-4b4d-9114-968620fbba31","Type":"ContainerStarted","Data":"e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51"}
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.488539 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-jqrq2" podStartSLOduration=3.4885275079999998 podStartE2EDuration="3.488527508s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:05.487254394 +0000 UTC m=+1209.003613467" watchObservedRunningTime="2026-02-17 16:14:05.488527508 +0000 UTC m=+1209.004886581"
Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.490280 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerStarted","Data":"775b438b7af2b3cc184f6f5f5f4c39d337ef64447d3370a28378044cb5ec6a4d"}
Feb 17 16:14:05 crc kubenswrapper[4808]: W0217 16:14:05.516470 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f52ebe4_f003_4d0b_8539_1d406db95b2f.slice/crio-770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c WatchSource:0}: Error finding container 770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c: Status 404 returned error can't find the container with id 770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c
Feb 17 16:14:05 crc kubenswrapper[4808]: E0217 16:14:05.659187 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02cbd83_d077_4812_b852_7fe9a0182b71.slice/crio-c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02cbd83_d077_4812_b852_7fe9a0182b71.slice/crio-conmon-c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.501383 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"ef24f9e78ce98b3bda972fae86b77ebebfb7fb39b2c1ff23acc62ed24557426c"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.506106 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.509539 4808 generic.go:334] "Generic (PLEG): container finished" podID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerID="b727a664b9c0061ba9f01801dd0228679fbc0026b1e712729a3b0f80c6eddfb3" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.509842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmq6n" event={"ID":"3ccecd7d-0e59-4336-a6ec-a595adbb727e","Type":"ContainerDied","Data":"b727a664b9c0061ba9f01801dd0228679fbc0026b1e712729a3b0f80c6eddfb3"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.511641 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d8-account-create-update-5vsvx" event={"ID":"02478fdd-380d-42f9-b105-c3ae86d224a8","Type":"ContainerDied","Data":"468b053d64c80baec6de3b54c4b2f477a89ae15f7b2f83e72b93e7a2a09b7e47"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.511732 4808 generic.go:334] "Generic (PLEG): container finished" podID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerID="468b053d64c80baec6de3b54c4b2f477a89ae15f7b2f83e72b93e7a2a09b7e47" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.514141 4808 generic.go:334] "Generic (PLEG): container finished" podID="ff670244-5344-4409-9823-6bfcf9ed274d" containerID="f07d48d83b8d167312f75dfe2e3617926d4c7c6a17b68b60f025f9a0615ec6aa" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.514204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ktddg" event={"ID":"ff670244-5344-4409-9823-6bfcf9ed274d","Type":"ContainerDied","Data":"f07d48d83b8d167312f75dfe2e3617926d4c7c6a17b68b60f025f9a0615ec6aa"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.516304 4808 generic.go:334] "Generic (PLEG): container finished" podID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerID="20f7389fa9f51fba5453c2a234db420d7d9f90654863c47b866a9ae0d75fd9b5" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.516354 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-r5lfk" event={"ID":"72e328d4-94e9-42bc-ae1c-b07b01d80072","Type":"ContainerDied","Data":"20f7389fa9f51fba5453c2a234db420d7d9f90654863c47b866a9ae0d75fd9b5"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.517506 4808 generic.go:334] "Generic (PLEG): container finished" podID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerID="ebb5009c36b8fd7590317bf3c492f0defedfa61fc35e3d839e79e88a3e507747" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.517543 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-78cc-account-create-update-k7vgl" event={"ID":"e183e901-16a0-43cf-9ce5-ef36da8686d1","Type":"ContainerDied","Data":"ebb5009c36b8fd7590317bf3c492f0defedfa61fc35e3d839e79e88a3e507747"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.518923 4808 generic.go:334] "Generic (PLEG): container finished" podID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerID="56b80ac7ee378fc8d9b7164abf8b6f6b4c7155149d6206a5a9c6aa08286e5594" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.518963 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8c80-account-create-update-rk4jj" event={"ID":"e5180ea6-12c0-4463-8fe5-c35ab2a15b44","Type":"ContainerDied","Data":"56b80ac7ee378fc8d9b7164abf8b6f6b4c7155149d6206a5a9c6aa08286e5594"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.520344 4808 generic.go:334] "Generic (PLEG): container finished" podID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerID="c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.520380 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerDied","Data":"c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0"}
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.522279 4808 generic.go:334] "Generic (PLEG): container finished" podID="2495c4d6-8174-4b4d-9114-968620fbba31" containerID="2e2ee0ccc758be665530168176318d177d82ba65213912cccc942306aee57326" exitCode=0
Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.522424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" event={"ID":"2495c4d6-8174-4b4d-9114-968620fbba31","Type":"ContainerDied","Data":"2e2ee0ccc758be665530168176318d177d82ba65213912cccc942306aee57326"}
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.106376 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.220643 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"7377369f-b540-4b85-be05-4200c9695a41\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") "
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.220708 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"7377369f-b540-4b85-be05-4200c9695a41\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") "
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.221813 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7377369f-b540-4b85-be05-4200c9695a41" (UID: "7377369f-b540-4b85-be05-4200c9695a41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.277181 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm" (OuterVolumeSpecName: "kube-api-access-9t2pm") pod "7377369f-b540-4b85-be05-4200c9695a41" (UID: "7377369f-b540-4b85-be05-4200c9695a41"). InnerVolumeSpecName "kube-api-access-9t2pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.323080 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.323110 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.534189 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"f2c18fb16875bf72623cb846c0041b7d6ff5f8cf313c79c3b111b6ad2358eedd"}
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.535911 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerDied","Data":"8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3"}
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.535944 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3"
Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.536096 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:14:08 crc kubenswrapper[4808]: I0217 16:14:08.218138 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:14:08 crc kubenswrapper[4808]: I0217 16:14:08.548670 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"a537df6f55dce8af21497e898f451fd7563f1f90fb34c6f630089eb48e909606"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.568332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmq6n" event={"ID":"3ccecd7d-0e59-4336-a6ec-a595adbb727e","Type":"ContainerDied","Data":"6c4dad549168fd0fe9877db14a616f977db4f3678b2cef50d4cc95501cb7ec97"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.568775 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4dad549168fd0fe9877db14a616f977db4f3678b2cef50d4cc95501cb7ec97"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.575386 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d8-account-create-update-5vsvx" event={"ID":"02478fdd-380d-42f9-b105-c3ae86d224a8","Type":"ContainerDied","Data":"3b5e73a2bf501307ef0912c3e2417e209a9bf79f1629e4736731809703ca6124"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.575430 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5e73a2bf501307ef0912c3e2417e209a9bf79f1629e4736731809703ca6124"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.577645 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ktddg" event={"ID":"ff670244-5344-4409-9823-6bfcf9ed274d","Type":"ContainerDied","Data":"db36c3bbf39537df83a4da37662c8e67b4aa150cf22c4630a5ddf0b8ff0b32b4"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.577690 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db36c3bbf39537df83a4da37662c8e67b4aa150cf22c4630a5ddf0b8ff0b32b4"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.582950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" event={"ID":"2495c4d6-8174-4b4d-9114-968620fbba31","Type":"ContainerDied","Data":"e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.583025 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.589208 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-78cc-account-create-update-k7vgl" event={"ID":"e183e901-16a0-43cf-9ce5-ef36da8686d1","Type":"ContainerDied","Data":"e734ff22797424d60d75d0ff894eb99b0a93ed10a3801fb6e5b9a52dcc8e1b52"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.589383 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e734ff22797424d60d75d0ff894eb99b0a93ed10a3801fb6e5b9a52dcc8e1b52"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.593548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8c80-account-create-update-rk4jj" event={"ID":"e5180ea6-12c0-4463-8fe5-c35ab2a15b44","Type":"ContainerDied","Data":"1c00c6c47bb9156cd63db3c65a93373cbafb8faed7fa643611f22da349c11bb0"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.593610 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c00c6c47bb9156cd63db3c65a93373cbafb8faed7fa643611f22da349c11bb0"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.595696 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerDied","Data":"f21b1b34203e339a6df9f3de1f3c14db9849e5fd507a49d6a22a7fc36cc73dbc"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.595724 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21b1b34203e339a6df9f3de1f3c14db9849e5fd507a49d6a22a7fc36cc73dbc"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.597423 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-r5lfk" event={"ID":"72e328d4-94e9-42bc-ae1c-b07b01d80072","Type":"ContainerDied","Data":"021f8a63c457f5f6931040c6e0c6166d1f2402d15c0182fc36f0fd1a25056869"}
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.597502 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="021f8a63c457f5f6931040c6e0c6166d1f2402d15c0182fc36f0fd1a25056869"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.672852 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ktddg"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.681715 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.691796 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.726673 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.740195 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.771156 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.773765 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.787452 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"ff670244-5344-4409-9823-6bfcf9ed274d\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845453 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"c02cbd83-d077-4812-b852-7fe9a0182b71\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845481 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845517 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"ff670244-5344-4409-9823-6bfcf9ed274d\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845537 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"e183e901-16a0-43cf-9ce5-ef36da8686d1\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845557 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"02478fdd-380d-42f9-b105-c3ae86d224a8\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845610 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"72e328d4-94e9-42bc-ae1c-b07b01d80072\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845643 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"02478fdd-380d-42f9-b105-c3ae86d224a8\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"e183e901-16a0-43cf-9ce5-ef36da8686d1\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"72e328d4-94e9-42bc-ae1c-b07b01d80072\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845731 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"c02cbd83-d077-4812-b852-7fe9a0182b71\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845761 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.846878 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72e328d4-94e9-42bc-ae1c-b07b01d80072" (UID: "72e328d4-94e9-42bc-ae1c-b07b01d80072"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.846911 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5180ea6-12c0-4463-8fe5-c35ab2a15b44" (UID: "e5180ea6-12c0-4463-8fe5-c35ab2a15b44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.847354 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff670244-5344-4409-9823-6bfcf9ed274d" (UID: "ff670244-5344-4409-9823-6bfcf9ed274d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.847453 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c02cbd83-d077-4812-b852-7fe9a0182b71" (UID: "c02cbd83-d077-4812-b852-7fe9a0182b71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.848611 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e183e901-16a0-43cf-9ce5-ef36da8686d1" (UID: "e183e901-16a0-43cf-9ce5-ef36da8686d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.849643 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02478fdd-380d-42f9-b105-c3ae86d224a8" (UID: "02478fdd-380d-42f9-b105-c3ae86d224a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.856181 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn" (OuterVolumeSpecName: "kube-api-access-j8xgn") pod "e5180ea6-12c0-4463-8fe5-c35ab2a15b44" (UID: "e5180ea6-12c0-4463-8fe5-c35ab2a15b44"). InnerVolumeSpecName "kube-api-access-j8xgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.857073 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg" (OuterVolumeSpecName: "kube-api-access-sx7rg") pod "72e328d4-94e9-42bc-ae1c-b07b01d80072" (UID: "72e328d4-94e9-42bc-ae1c-b07b01d80072"). InnerVolumeSpecName "kube-api-access-sx7rg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.859340 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf" (OuterVolumeSpecName: "kube-api-access-6r6zf") pod "02478fdd-380d-42f9-b105-c3ae86d224a8" (UID: "02478fdd-380d-42f9-b105-c3ae86d224a8"). InnerVolumeSpecName "kube-api-access-6r6zf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.859692 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8" (OuterVolumeSpecName: "kube-api-access-xj2f8") pod "c02cbd83-d077-4812-b852-7fe9a0182b71" (UID: "c02cbd83-d077-4812-b852-7fe9a0182b71"). InnerVolumeSpecName "kube-api-access-xj2f8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.860295 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74" (OuterVolumeSpecName: "kube-api-access-7rj74") pod "e183e901-16a0-43cf-9ce5-ef36da8686d1" (UID: "e183e901-16a0-43cf-9ce5-ef36da8686d1"). InnerVolumeSpecName "kube-api-access-7rj74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.860675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh" (OuterVolumeSpecName: "kube-api-access-dspfh") pod "ff670244-5344-4409-9823-6bfcf9ed274d" (UID: "ff670244-5344-4409-9823-6bfcf9ed274d"). InnerVolumeSpecName "kube-api-access-dspfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947613 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947696 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"2495c4d6-8174-4b4d-9114-968620fbba31\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947742 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947885 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"2495c4d6-8174-4b4d-9114-968620fbba31\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") "
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948325 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ccecd7d-0e59-4336-a6ec-a595adbb727e" (UID: "3ccecd7d-0e59-4336-a6ec-a595adbb727e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948447 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2495c4d6-8174-4b4d-9114-968620fbba31" (UID: "2495c4d6-8174-4b4d-9114-968620fbba31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948930 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948958 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948971 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948984 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948998 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949012 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949024 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949036 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949048 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949062 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949075 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949087 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949099 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949110 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.951762 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n" (OuterVolumeSpecName: "kube-api-access-mx65n") pod "3ccecd7d-0e59-4336-a6ec-a595adbb727e" (UID: "3ccecd7d-0e59-4336-a6ec-a595adbb727e"). InnerVolumeSpecName "kube-api-access-mx65n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.952682 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4" (OuterVolumeSpecName: "kube-api-access-5dqw4") pod "2495c4d6-8174-4b4d-9114-968620fbba31" (UID: "2495c4d6-8174-4b4d-9114-968620fbba31"). InnerVolumeSpecName "kube-api-access-5dqw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.050467 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.050494 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.610767 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerStarted","Data":"1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c"}
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614482 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614613 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"491643042c0152f38129738d60fe00177c88399b512e5240d03ab9d4b0d4ece7"}
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614675 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"59cbb05f824a3ef841fb687bc9090d82b0e7e6d58f0798feee4f22da8aef9866"}
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"27b6ab5d4d28f4a5b479a7551ba71f2fa6d495478c1afa92ecb29a9b87576d4a"}
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614971 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615040 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615066 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615095 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615125 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ktddg"
Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.642109 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kzjns" podStartSLOduration=4.504135956 podStartE2EDuration="9.642085208s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:05.401891083 +0000 UTC m=+1208.918250156" lastFinishedPulling="2026-02-17 16:14:10.539840325 +0000 UTC m=+1214.056199408" observedRunningTime="2026-02-17 16:14:11.629323223 +0000 UTC m=+1215.145682326" watchObservedRunningTime="2026-02-17 16:14:11.642085208 +0000 UTC m=+1215.158444301"
Feb 17 16:14:12 crc kubenswrapper[4808]: I0217 16:14:12.626766 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"b2a7c5ffc9b4e3884f38f87b5b2eda9b703b71f1e4c9a4c9c858de2db7371020"}
Feb 17 16:14:13 crc kubenswrapper[4808]: I0217 16:14:13.654684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"df9f68a5854b4f1558a0524fced4d38e13660337302c92bef5248d815dfd21c4"}
Feb 17 16:14:13 crc kubenswrapper[4808]: I0217 16:14:13.655059 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"e48beb2c358671a6c7db7f0ee8e9fb94bf4431513f6021181e53bc794008621a"}
Feb 17 16:14:13 crc kubenswrapper[4808]: I0217 16:14:13.655085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"6c3bb6aea8cbb30b8a9eae461c406068fbf442b0f36daa227bb7270c104f357f"}
Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.675612 4808
generic.go:334] "Generic (PLEG): container finished" podID="dadd7e91-13f0-4ba2-9f87-ad057567a56d" containerID="a537df6f55dce8af21497e898f451fd7563f1f90fb34c6f630089eb48e909606" exitCode=0 Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.675737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerDied","Data":"a537df6f55dce8af21497e898f451fd7563f1f90fb34c6f630089eb48e909606"} Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.683243 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"7b4da3c810403b3cdb3db26c8c3246fd68acd6115ee8aeff40464c7a3ebc9c97"} Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.683282 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"e2dcb95417c1f379bf96d646de9cbf2961f747d1dd658fac1841cf7282542ac5"} Feb 17 16:14:15 crc kubenswrapper[4808]: E0217 16:14:15.952786 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.697634 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"e60078b07b0caf9d38ff7dd0a579724180a348b6373ed99a735f3a21becd9e5f"} Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.699792 4808 generic.go:334] "Generic (PLEG): container finished" podID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerID="1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c" exitCode=0 Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.699844 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerDied","Data":"1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c"} Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.711973 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"6595237b391e67ed09cb1881b7b4f03893623f863075fed0e65248cf65ce7c4b"} Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712025 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"ecdc1158d969e6da45456366a446f550a2b7d52f06dc7596569b8baa90a8a564"} Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"a034788c022750136ff34bf82590c806a6e424137889c25f3dbef22d52899426"} Feb 17 
16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712047 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"f1073016d5f2f6b5d054bce37c43a6a88228df020ecfd931154f637eafef3d55"} Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"d1a018be7a22a09cf47a08d07b315fda3ffd60d6f30745e5dd18c23d950530e1"} Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.753465 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.133134597 podStartE2EDuration="49.753444821s" podCreationTimestamp="2026-02-17 16:13:27 +0000 UTC" firstStartedPulling="2026-02-17 16:14:05.519878527 +0000 UTC m=+1209.036237600" lastFinishedPulling="2026-02-17 16:14:15.140188711 +0000 UTC m=+1218.656547824" observedRunningTime="2026-02-17 16:14:16.749972256 +0000 UTC m=+1220.266331339" watchObservedRunningTime="2026-02-17 16:14:16.753444821 +0000 UTC m=+1220.269803904" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.025820 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026297 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026323 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026341 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026350 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026359 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026367 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026381 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7377369f-b540-4b85-be05-4200c9695a41" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026388 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7377369f-b540-4b85-be05-4200c9695a41" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026398 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026404 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026421 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026428 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026450 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026475 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026484 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026494 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026502 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027511 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027544 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027559 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027590 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerName="mariadb-database-create" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027601 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027614 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027626 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027638 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerName="mariadb-account-create-update" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027653 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7377369f-b540-4b85-be05-4200c9695a41" containerName="mariadb-account-create-update" Feb 17 16:14:17 
crc kubenswrapper[4808]: I0217 16:14:17.028969 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.034841 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.035037 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173471 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173631 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.276605 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277815 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.278012 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.278362 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.279124 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.279566 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.280004 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod 
\"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.301334 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.348215 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.822168 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:17 crc kubenswrapper[4808]: W0217 16:14:17.844718 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b951c6_37fc_4757_bafd_ef3647e3b701.slice/crio-1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e WatchSource:0}: Error finding container 1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e: Status 404 returned error can't find the container with id 1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.064829 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.193301 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.193631 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.194423 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.198509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq" (OuterVolumeSpecName: "kube-api-access-f6rjq") pod "41c68bd6-6280-4a89-be87-4d65f06a5a4d" (UID: "41c68bd6-6280-4a89-be87-4d65f06a5a4d"). InnerVolumeSpecName "kube-api-access-f6rjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.230107 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c68bd6-6280-4a89-be87-4d65f06a5a4d" (UID: "41c68bd6-6280-4a89-be87-4d65f06a5a4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.252191 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data" (OuterVolumeSpecName: "config-data") pod "41c68bd6-6280-4a89-be87-4d65f06a5a4d" (UID: "41c68bd6-6280-4a89-be87-4d65f06a5a4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.297133 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.297165 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.297181 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.747191 4808 generic.go:334] "Generic (PLEG): container finished" podID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" exitCode=0 Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.747706 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerDied","Data":"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109"} Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.747787 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerStarted","Data":"1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e"} Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.752849 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerStarted","Data":"be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232"} Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.759165 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerDied","Data":"775b438b7af2b3cc184f6f5f5f4c39d337ef64447d3370a28378044cb5ec6a4d"} Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.759222 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="775b438b7af2b3cc184f6f5f5f4c39d337ef64447d3370a28378044cb5ec6a4d" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.759301 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.806407 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4mdzt" podStartSLOduration=2.945918163 podStartE2EDuration="33.806386984s" podCreationTimestamp="2026-02-17 16:13:45 +0000 UTC" firstStartedPulling="2026-02-17 16:13:46.852862329 +0000 UTC m=+1190.369221402" lastFinishedPulling="2026-02-17 16:14:17.71333115 +0000 UTC m=+1221.229690223" observedRunningTime="2026-02-17 16:14:18.805197663 +0000 UTC m=+1222.321556736" watchObservedRunningTime="2026-02-17 16:14:18.806386984 +0000 UTC m=+1222.322746067" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.018502 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.038878 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:19 crc kubenswrapper[4808]: E0217 16:14:19.045907 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerName="keystone-db-sync" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.046028 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerName="keystone-db-sync" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.046252 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerName="keystone-db-sync" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.050228 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.058922 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.059808 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.059828 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.059940 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.060458 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.081813 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.112601 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114030 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114124 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklnb\" (UniqueName: 
\"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114219 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114242 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114282 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114360 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114950 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.166884 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216257 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216314 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216337 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216385 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216450 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216488 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216535 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216597 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.227337 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.235903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.241676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.243021 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.247456 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.289247 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod 
\"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.302655 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.303878 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.309767 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.310007 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bqdgs" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.310162 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.318627 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.319973 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320633 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320754 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320852 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: 
\"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.321989 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.322627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.322902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.323252 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.323940 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.332009 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.332222 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-89rvs" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.332343 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.344655 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.364222 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.379230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.390778 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432525 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432611 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432684 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432706 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432725 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432805 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: 
\"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.446252 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.471781 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.474071 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.480388 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.480726 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.492715 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.511891 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.534640 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.535838 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.538500 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-26x5l" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539473 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539512 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539555 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539598 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539619 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " 
pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539699 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539722 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539742 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.546181 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.549130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.556472 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.558828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.559089 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 
16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.578180 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.596206 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.596277 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.604186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.605495 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.624300 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.682887 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.705069 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799056 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799414 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799440 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799620 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799824 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799938 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc 
kubenswrapper[4808]: I0217 16:14:19.799967 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.801264 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.843671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.845360 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.850485 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p4pcv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.850736 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.850844 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.865014 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.879731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"242e1b17b83477623f3db53de91633b1733bef1f427e3e630e934f7135ecb6d2"} Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.879823 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"c94cacbebe726d53c1cfb7a9941c3178ffe9137486d282c308ad8f46f0586896"} Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.904971 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerStarted","Data":"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad"} Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.905137 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" containerID="cri-o://5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" gracePeriod=10 Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.906006 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908335 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908395 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908463 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908480 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908498 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908525 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908543 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908590 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz8lw\" (UniqueName: 
\"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912127 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912156 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912210 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912235 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912275 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912354 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: 
\"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912383 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912923 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.913464 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.914114 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.915115 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.918883 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.919049 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.925041 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.925662 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kqv9d" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.928912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.929312 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.936867 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.943335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " 
pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.944893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.950264 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.950438 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.951398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.951450 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.964742 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.988624 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" podStartSLOduration=2.988603103 podStartE2EDuration="2.988603103s" podCreationTimestamp="2026-02-17 16:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:19.929633997 +0000 UTC m=+1223.445993080" watchObservedRunningTime="2026-02-17 16:14:19.988603103 +0000 UTC m=+1223.504962176" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024275 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024484 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024508 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc 
kubenswrapper[4808]: I0217 16:14:20.024601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024647 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024686 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024720 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024853 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024876 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024913 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024954 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.025055 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.025096 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.029205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.029895 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.030340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.030419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.031001 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.031720 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod 
\"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.035533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.035703 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.043229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.043394 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.045482 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.127764 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.127942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.127968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.128005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 
16:14:20.128042 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.132122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.132896 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.136164 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.137478 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.153498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.174037 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.190985 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.221318 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.237514 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.239703 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.257010 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.344941 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.446915 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.569198 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:14:20 crc kubenswrapper[4808]: W0217 16:14:20.580052 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc3be3_7aa7_4384_97ed_1ec7bf75f026.slice/crio-722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc WatchSource:0}: Error finding container 722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc: Status 404 returned error can't find the container with id 722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.699957 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.753263 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.753675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.753865 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.754025 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.754139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.754247 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.755913 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 
16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.775632 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp" (OuterVolumeSpecName: "kube-api-access-rcsqp") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "kube-api-access-rcsqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:20 crc kubenswrapper[4808]: W0217 16:14:20.783758 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9fba55_1b70_4d39_a052_bff96bd8e93a.slice/crio-722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00 WatchSource:0}: Error finding container 722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00: Status 404 returned error can't find the container with id 722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00 Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.857084 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.910473 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.917230 4808 generic.go:334] "Generic (PLEG): container finished" podID="4cdfa661-fa28-48be-b416-f2e69927fc9b" containerID="29684d96c1943280e84c76de58aee0550b74d290c562f5eaa5511b6310aa658b" exitCode=0 Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.917298 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" event={"ID":"4cdfa661-fa28-48be-b416-f2e69927fc9b","Type":"ContainerDied","Data":"29684d96c1943280e84c76de58aee0550b74d290c562f5eaa5511b6310aa658b"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.917339 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" event={"ID":"4cdfa661-fa28-48be-b416-f2e69927fc9b","Type":"ContainerStarted","Data":"d136669bdec3d3e1777e3899ee2de7762492e7209be6f5909cc9b03217da4323"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.923459 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.929080 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932603 4808 generic.go:334] "Generic (PLEG): container finished" podID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" exitCode=0 Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932667 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerDied","Data":"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" 
event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerDied","Data":"1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932709 4808 scope.go:117] "RemoveContainer" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932831 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.937424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerStarted","Data":"17116d89192a8613360b83b9abc0d23bc6d3cc17099f32067b22d7c7d3c6494e"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.940334 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.948016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerStarted","Data":"5717dd2ef8af55d59bb6a6c87c756928ce372bb105a7380fa60e88c0fb60d552"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.955951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerStarted","Data":"722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.991027 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.991004294 podStartE2EDuration="16.991004294s" podCreationTimestamp="2026-02-17 16:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:20.986073361 +0000 UTC m=+1224.502432434" watchObservedRunningTime="2026-02-17 16:14:20.991004294 +0000 UTC m=+1224.507363377" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.010625 4808 scope.go:117] "RemoveContainer" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.017508 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.063290 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.067141 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.068426 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.071635 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config" (OuterVolumeSpecName: "config") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.074865 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.127387 4808 scope.go:117] "RemoveContainer" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" Feb 17 16:14:21 crc kubenswrapper[4808]: E0217 16:14:21.130655 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad\": container with ID starting with 5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad not found: ID does not exist" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.130687 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad"} err="failed to get container status \"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad\": rpc error: code = NotFound desc = could not find container \"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad\": container with ID starting with 5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad not found: ID does not exist" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.130708 4808 scope.go:117] "RemoveContainer" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" Feb 17 16:14:21 crc kubenswrapper[4808]: E0217 16:14:21.141829 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109\": container with ID starting with 6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109 not found: ID does not exist" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.141863 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109"} err="failed to get container status 
\"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109\": rpc error: code = NotFound desc = could not find container \"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109\": container with ID starting with 6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109 not found: ID does not exist" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.165466 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.165496 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.165507 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.165517 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.187431 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.293475 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.307730 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.575315 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707162 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707264 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707321 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707461 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707554 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.714737 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl" (OuterVolumeSpecName: "kube-api-access-b4mdl") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "kube-api-access-b4mdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.741124 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.741770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config" (OuterVolumeSpecName: "config") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.750164 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.753056 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.765028 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814003 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814038 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814048 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814057 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814066 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814076 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.975745 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerStarted","Data":"f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0"} Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.984530 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" 
event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerStarted","Data":"9ba656f842dfb00605cd2712c9679dadbf966fdee137e5405e4ec802b02357c9"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.014870 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerStarted","Data":"5b531905add091d4dfe9c3b871669f1b4764b98e78ffc02ea10bcfde5b754358"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.016752 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" event={"ID":"4cdfa661-fa28-48be-b416-f2e69927fc9b","Type":"ContainerDied","Data":"d136669bdec3d3e1777e3899ee2de7762492e7209be6f5909cc9b03217da4323"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.016784 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.016785 4808 scope.go:117] "RemoveContainer" containerID="29684d96c1943280e84c76de58aee0550b74d290c562f5eaa5511b6310aa658b" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.020745 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac763412-39e7-40d0-892a-57ac801af2bb" containerID="3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0" exitCode=0 Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.020796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerDied","Data":"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.020814 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerStarted","Data":"027ce35e95410cc92a867a6b938a45485c623b5bfa8d8827b979b970dbe86f22"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.021953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerStarted","Data":"e334d06468b3a37f46d5f6db68268b3881996656b8f3df2be0b3c006d2589a72"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.026208 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerStarted","Data":"256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.058675 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jskwv" podStartSLOduration=3.05860718 podStartE2EDuration="3.05860718s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:21.997822715 +0000 UTC m=+1225.514181798" watchObservedRunningTime="2026-02-17 16:14:22.05860718 +0000 UTC m=+1225.574966253" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.140478 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p2fwj" podStartSLOduration=3.140456437 podStartE2EDuration="3.140456437s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:22.085417456 +0000 UTC m=+1225.601776529" watchObservedRunningTime="2026-02-17 16:14:22.140456437 +0000 UTC m=+1225.656815510" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.202690 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.224799 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.238058 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.050677 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerStarted","Data":"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9"} Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.051150 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.166833 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" path="/var/lib/kubelet/pods/4cdfa661-fa28-48be-b416-f2e69927fc9b/volumes" Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.167452 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" path="/var/lib/kubelet/pods/75b951c6-37fc-4757-bafd-ef3647e3b701/volumes" Feb 17 16:14:25 crc kubenswrapper[4808]: I0217 16:14:25.094038 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:26 crc kubenswrapper[4808]: I0217 16:14:26.118417 4808 generic.go:334] "Generic (PLEG): container finished" podID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerID="256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d" exitCode=0 Feb 17 16:14:26 crc kubenswrapper[4808]: I0217 16:14:26.118493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerDied","Data":"256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d"} Feb 17 16:14:26 crc kubenswrapper[4808]: I0217 16:14:26.151041 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" podStartSLOduration=7.150745027 podStartE2EDuration="7.150745027s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:23.074000353 +0000 UTC m=+1226.590359426" watchObservedRunningTime="2026-02-17 16:14:26.150745027 +0000 UTC m=+1229.667104100" Feb 17 16:14:26 crc kubenswrapper[4808]: E0217 16:14:26.235004 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:30 crc kubenswrapper[4808]: I0217 16:14:30.223537 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:30 crc kubenswrapper[4808]: I0217 16:14:30.284030 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:14:30 crc kubenswrapper[4808]: I0217 16:14:30.285015 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" containerID="cri-o://5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a" gracePeriod=10 Feb 17 16:14:31 crc kubenswrapper[4808]: I0217 16:14:31.172759 4808 generic.go:334] "Generic (PLEG): container finished" podID="317e56c8-5f01-4313-a632-12ccaccf9442" containerID="5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a" exitCode=0 Feb 17 16:14:31 crc kubenswrapper[4808]: I0217 16:14:31.172798 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerDied","Data":"5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a"} Feb 17 16:14:32 crc kubenswrapper[4808]: I0217 16:14:32.184102 4808 generic.go:334] "Generic (PLEG): container finished" podID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" containerID="be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232" exitCode=0 Feb 17 16:14:32 crc kubenswrapper[4808]: I0217 16:14:32.184169 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerDied","Data":"be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232"} Feb 17 16:14:32 crc kubenswrapper[4808]: I0217 16:14:32.996917 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.386803 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512418 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512592 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512624 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512812 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512848 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512874 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.518405 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.518453 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts" (OuterVolumeSpecName: "scripts") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.518464 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb" (OuterVolumeSpecName: "kube-api-access-nklnb") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "kube-api-access-nklnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.520289 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.538784 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data" (OuterVolumeSpecName: "config-data") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.541888 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614521 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614557 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614587 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614599 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614611 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614620 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.209806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerDied","Data":"17116d89192a8613360b83b9abc0d23bc6d3cc17099f32067b22d7c7d3c6494e"} Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.209850 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.210220 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17116d89192a8613360b83b9abc0d23bc6d3cc17099f32067b22d7c7d3c6494e" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.470228 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.481682 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.494822 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495183 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495199 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495212 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495217 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495233 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerName="keystone-bootstrap" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495240 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerName="keystone-bootstrap" Feb 17 16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495251 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495257 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495422 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495444 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495455 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerName="keystone-bootstrap" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.496115 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501125 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501271 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501278 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501520 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.507699 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639603 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639674 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639727 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639746 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639834 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639938 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741562 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"keystone-bootstrap-67f4b\" (UID: 
\"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741648 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741677 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741711 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741767 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.749451 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.760269 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.760326 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.760654 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.762037 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.762959 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.832591 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.094007 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.102782 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.165184 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" path="/var/lib/kubelet/pods/4e39a33f-5d00-4171-bf63-6b12226901d3/volumes" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.224163 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:36 crc kubenswrapper[4808]: E0217 16:14:36.489936 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:40 crc kubenswrapper[4808]: I0217 16:14:40.266481 4808 generic.go:334] "Generic (PLEG): container finished" podID="436b0400-6c82-450b-9505-61bf124b5db5" containerID="f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0" exitCode=0 Feb 17 16:14:40 crc kubenswrapper[4808]: I0217 16:14:40.266606 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerDied","Data":"f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0"} Feb 17 16:14:42 crc kubenswrapper[4808]: I0217 16:14:42.996863 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.269467 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.312277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerDied","Data":"5717dd2ef8af55d59bb6a6c87c756928ce372bb105a7380fa60e88c0fb60d552"} Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.312321 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.312334 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717dd2ef8af55d59bb6a6c87c756928ce372bb105a7380fa60e88c0fb60d552" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.320452 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.320514 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.320833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.328779 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj" (OuterVolumeSpecName: "kube-api-access-8zvwj") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5"). InnerVolumeSpecName "kube-api-access-8zvwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.347014 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle podName:436b0400-6c82-450b-9505-61bf124b5db5 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:44.846898456 +0000 UTC m=+1248.363257539 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5") : error deleting /var/lib/kubelet/pods/436b0400-6c82-450b-9505-61bf124b5db5/volume-subpaths: remove /var/lib/kubelet/pods/436b0400-6c82-450b-9505-61bf124b5db5/volume-subpaths: no such file or directory Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.349196 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config" (OuterVolumeSpecName: "config") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.422655 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.422685 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.840320 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.843755 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.843947 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zvc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rwld8_openstack(5bf4d932-664a-46c6-bec5-f2b70950c824): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.845748 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rwld8" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931494 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931687 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931729 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931918 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.935560 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486" (OuterVolumeSpecName: "kube-api-access-rb486") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "kube-api-access-rb486". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.936141 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.936610 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.956268 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.978540 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data" (OuterVolumeSpecName: "config-data") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033788 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033834 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033847 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033859 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033871 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.324459 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerDied","Data":"e5bfc747bb74b14a5184eb3f8c16443aca59a2667d60646ea7965a405418e0b0"} Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.324481 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.324497 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5bfc747bb74b14a5184eb3f8c16443aca59a2667d60646ea7965a405418e0b0" Feb 17 16:14:45 crc kubenswrapper[4808]: E0217 16:14:45.326741 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rwld8" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.526307 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:45 crc kubenswrapper[4808]: E0217 16:14:45.527187 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436b0400-6c82-450b-9505-61bf124b5db5" containerName="neutron-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527215 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="436b0400-6c82-450b-9505-61bf124b5db5" containerName="neutron-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: E0217 16:14:45.527240 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" containerName="glance-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527249 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" containerName="glance-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527531 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="436b0400-6c82-450b-9505-61bf124b5db5" containerName="neutron-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527595 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" containerName="glance-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.529395 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.555838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651594 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651620 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.652105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.652248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.683259 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.684988 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.690777 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-89rvs" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.691741 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.692042 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.694241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.699703 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754095 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754171 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754206 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754226 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754260 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"neutron-5c8b8554dd-86wnt\" (UID: 
\"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754341 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754413 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.755364 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.755792 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.758878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.759048 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.759884 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 
16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.792151 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856285 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856347 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856556 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856840 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.857157 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.860350 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.860442 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.862309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.862529 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.872614 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.877142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.982604 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.044153 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.060709 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.060750 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.060969 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.061036 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.061056 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.065783 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5" (OuterVolumeSpecName: "kube-api-access-2l9h5") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "kube-api-access-2l9h5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.165845 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.189030 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.195173 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config" (OuterVolumeSpecName: "config") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.234773 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.236825 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.247128 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267258 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267290 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267300 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267310 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.282196 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:14:46 crc kubenswrapper[4808]: E0217 16:14:46.283399 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="init" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.283427 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="init" Feb 17 16:14:46 crc kubenswrapper[4808]: E0217 16:14:46.283471 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.283479 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.283697 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.308500 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.367715 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378314 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378510 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378525 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.383247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerDied","Data":"ddfff32a5e606c9bd26b149ee55b24df69316a56d9a9ba2c7680c271a80e072c"} Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.383320 4808 scope.go:117] "RemoveContainer" containerID="5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.383568 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.453408 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.461905 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.480866 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.480921 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.480978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481129 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481160 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481182 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481906 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.483259 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 
16:14:46.484159 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.484940 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.485309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.503517 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.679556 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: E0217 16:14:46.745745 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.106872 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.108662 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.123719 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.123900 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.124005 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xhb8t" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.130401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.164979 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" path="/var/lib/kubelet/pods/317e56c8-5f01-4313-a632-12ccaccf9442/volumes" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195185 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195244 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195306 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195438 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.196194 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.196255 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298313 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298473 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298544 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298610 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298692 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.300247 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.301175 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.304044 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.304109 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/793125420e976eb43638bc1f8c10c1dbf19200ea40f241dea1aa3deff96042e8/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.304907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.306834 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.308557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.331902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.357633 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.421366 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.425646 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.429535 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.439167 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.455279 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503788 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503845 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.504072 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.504112 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.504147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607203 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607468 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607528 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607644 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607675 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.608061 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.608113 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.611402 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.612807 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.612885 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.614095 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.614124 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/babb0a58e49abb7abbb526a723d7265132519584485959e000cf4b8b02c96a84/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.640342 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.646436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.751766 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.997606 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.020087 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.020281 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mc46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jcqjf_openstack(d0cc3be3-7aa7-4384-97ed-1ec7bf75f026): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.021479 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jcqjf" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.408759 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-jcqjf" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" Feb 17 16:14:50 crc kubenswrapper[4808]: I0217 16:14:50.984134 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.060309 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.675864 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.687812 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.687978 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688049 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688069 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " 
pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.710347 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.710764 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790754 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790826 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790961 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.791041 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"neutron-6576669595-nvtln\" (UID: 
\"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.795197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.795476 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.795641 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.795831 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.797250 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.814430 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.815249 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:52 crc kubenswrapper[4808]: I0217 16:14:52.038070 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:53 crc kubenswrapper[4808]: I0217 16:14:53.856011 4808 scope.go:117] "RemoveContainer" containerID="05efd9fb2a30652e1a674ecb739d46dca429eecdc2a90da4de03961953c36078" Feb 17 16:14:54 crc kubenswrapper[4808]: I0217 16:14:54.323825 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:14:54 crc kubenswrapper[4808]: W0217 16:14:54.516091 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb977bed_804c_4e4c_8d35_5562015024f3.slice/crio-c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880 WatchSource:0}: Error finding container c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880: Status 404 returned error can't find the container with id c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880 Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.527690 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.528010 4808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.528478 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jmms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},P
rivileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-wdrmd_openstack(2ec52dbb-ca2f-4013-8536-972042607240): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.529691 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-wdrmd" podUID="2ec52dbb-ca2f-4013-8536-972042607240" Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.169795 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.172045 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.262349 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.323044 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.391970 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.554255 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.560781 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerStarted","Data":"673b376ab9a6f91954598ab4a63c75d818d8ff65e3bf87016ce8c6e162ed2846"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.581166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerStarted","Data":"6a095cda0c57e7c83e37162d0a00993ab0fc7d2ed318b1cd5b24f7f8e6f8ed0d"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.593443 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerStarted","Data":"37ecb8a325939b5e585da0c83aac7cd196aa16f8c7e46e0941abecb0dea07a08"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.594654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerStarted","Data":"7582431cc96f656a76c273158d6a6121cb9dd22056c9bc46740b2c3ec436de2b"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.597145 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" 
event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerStarted","Data":"f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.597173 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerStarted","Data":"c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.599179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" event={"ID":"a79be637-3b6e-4ccf-8bbe-95b1baf64444","Type":"ContainerStarted","Data":"24c9ce81f9e602d6a930f27dc304d5868bca2e20b4aea4429bb4f1c683cfc845"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.606853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerStarted","Data":"8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7"} Feb 17 16:14:55 crc kubenswrapper[4808]: E0217 16:14:55.609279 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-wdrmd" podUID="2ec52dbb-ca2f-4013-8536-972042607240" Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.616013 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-67f4b" podStartSLOduration=21.615995931 podStartE2EDuration="21.615995931s" podCreationTimestamp="2026-02-17 16:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:55.614233604 +0000 UTC m=+1259.130592677" watchObservedRunningTime="2026-02-17 16:14:55.615995931 +0000 UTC m=+1259.132355014" Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.653909 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-d52vg" podStartSLOduration=7.1351817650000005 podStartE2EDuration="36.653892307s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:21.010607125 +0000 UTC m=+1224.526966198" lastFinishedPulling="2026-02-17 16:14:50.529317617 +0000 UTC m=+1254.045676740" observedRunningTime="2026-02-17 16:14:55.652875529 +0000 UTC m=+1259.169234602" watchObservedRunningTime="2026-02-17 16:14:55.653892307 +0000 UTC m=+1259.170251380" Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.148219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.616079 4808 generic.go:334] "Generic (PLEG): container finished" podID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerID="dddcaac247851948b323e115b84153bfcbcb71436b40ee468a0fbbfe54d676ae" exitCode=0 Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.616159 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerDied","Data":"dddcaac247851948b323e115b84153bfcbcb71436b40ee468a0fbbfe54d676ae"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.626441 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerID="bcee8f3f2e22515c4ec2c71a0c369ae17f4dcd41bc80c7856231434378167962" exitCode=0 Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.626502 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" event={"ID":"a79be637-3b6e-4ccf-8bbe-95b1baf64444","Type":"ContainerDied","Data":"bcee8f3f2e22515c4ec2c71a0c369ae17f4dcd41bc80c7856231434378167962"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.633307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerStarted","Data":"811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.643067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerStarted","Data":"6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.643131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerStarted","Data":"f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.643149 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.648317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerStarted","Data":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.734756 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c8b8554dd-86wnt" podStartSLOduration=11.734719771 podStartE2EDuration="11.734719771s" podCreationTimestamp="2026-02-17 16:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:56.703949178 +0000 UTC m=+1260.220308251" watchObservedRunningTime="2026-02-17 16:14:56.734719771 +0000 UTC m=+1260.251078844" Feb 17 16:14:57 crc kubenswrapper[4808]: E0217 16:14:57.068000 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.133341 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:57 crc kubenswrapper[4808]: E0217 16:14:57.185151 4808 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8220dc80e343188ccc976cd01bb233632f3de453fd04815105dfdf15196faa6a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8220dc80e343188ccc976cd01bb233632f3de453fd04815105dfdf15196faa6a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_prometheus-metric-storage-0_2917eca2-0431-4bd6-ad96-ab8464cc4fd7/config-reloader/0.log" to get inode usage: stat /var/log/pods/openstack_prometheus-metric-storage-0_2917eca2-0431-4bd6-ad96-ab8464cc4fd7/config-reloader/0.log: no such file or directory Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.321229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.321988 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322021 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322046 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322164 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.356372 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877" (OuterVolumeSpecName: "kube-api-access-jr877") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "kube-api-access-jr877". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.357365 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.360048 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config" (OuterVolumeSpecName: "config") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.370545 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.371912 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.387338 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427090 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427140 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427156 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427171 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427183 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427196 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.656735 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerStarted","Data":"c10fc6d6f2a4869db9fa18326dfe2683218bcdc439daca6286604be99d676aab"} Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.658953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" event={"ID":"a79be637-3b6e-4ccf-8bbe-95b1baf64444","Type":"ContainerDied","Data":"24c9ce81f9e602d6a930f27dc304d5868bca2e20b4aea4429bb4f1c683cfc845"} Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.659007 4808 scope.go:117] "RemoveContainer" containerID="bcee8f3f2e22515c4ec2c71a0c369ae17f4dcd41bc80c7856231434378167962" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.659044 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.758064 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.772371 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:59 crc kubenswrapper[4808]: I0217 16:14:59.162808 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" path="/var/lib/kubelet/pods/a79be637-3b6e-4ccf-8bbe-95b1baf64444/volumes" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.154853 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 16:15:00 crc kubenswrapper[4808]: E0217 16:15:00.155349 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerName="init" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.155372 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerName="init" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.155652 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerName="init" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.156520 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.161003 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.161026 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.169064 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.287121 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.287212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.287823 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 
16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.389404 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.389490 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.389646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.390496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.395276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.408312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.489716 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:01 crc kubenswrapper[4808]: I0217 16:15:01.710211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerStarted","Data":"fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84"} Feb 17 16:15:01 crc kubenswrapper[4808]: I0217 16:15:01.713046 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerStarted","Data":"f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1"} Feb 17 16:15:03 crc kubenswrapper[4808]: I0217 16:15:03.733668 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerStarted","Data":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"} Feb 17 16:15:03 crc kubenswrapper[4808]: I0217 16:15:03.736941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerStarted","Data":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.594929 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 16:15:04 crc kubenswrapper[4808]: W0217 16:15:04.610196 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f86f53_7772_428e_b916_8624c83de123.slice/crio-bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9 WatchSource:0}: Error finding container bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9: Status 404 returned error can't find the container with id bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.752181 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" event={"ID":"41f86f53-7772-428e-b916-8624c83de123","Type":"ContainerStarted","Data":"bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.754593 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.755942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerStarted","Data":"d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759274 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb977bed-804c-4e4c-8d35-5562015024f3" containerID="f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c" exitCode=0 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" 
event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerDied","Data":"f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759586 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" containerID="cri-o://8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" gracePeriod=30 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759751 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" containerID="cri-o://25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" gracePeriod=30 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759779 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759853 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.779484 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rwld8" podStartSLOduration=2.509486281 podStartE2EDuration="45.779467457s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:20.946027596 +0000 UTC m=+1224.462386669" lastFinishedPulling="2026-02-17 16:15:04.216008772 +0000 UTC m=+1267.732367845" observedRunningTime="2026-02-17 16:15:04.76881668 +0000 UTC m=+1268.285175753" watchObservedRunningTime="2026-02-17 16:15:04.779467457 +0000 UTC m=+1268.295826530" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.810906 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6576669595-nvtln" podStartSLOduration=13.810884099 podStartE2EDuration="13.810884099s" podCreationTimestamp="2026-02-17 16:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:04.788107181 +0000 UTC m=+1268.304466254" watchObservedRunningTime="2026-02-17 16:15:04.810884099 +0000 UTC m=+1268.327243172" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.840425 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" podStartSLOduration=18.840382617 podStartE2EDuration="18.840382617s" podCreationTimestamp="2026-02-17 16:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:04.825061342 +0000 UTC m=+1268.341420435" watchObservedRunningTime="2026-02-17 16:15:04.840382617 +0000 UTC m=+1268.356741690" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.851029 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.851002285 podStartE2EDuration="18.851002285s" podCreationTimestamp="2026-02-17 16:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:04.847923421 +0000 UTC m=+1268.364282494" watchObservedRunningTime="2026-02-17 16:15:04.851002285 +0000 UTC m=+1268.367361358" 
Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.772687 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.775026 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log" containerID="cri-o://98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" gracePeriod=30 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.775098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerStarted","Data":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.775180 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd" containerID="cri-o://4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" gracePeriod=30 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785092 4808 generic.go:334] "Generic (PLEG): container finished" podID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" exitCode=0 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785137 4808 generic.go:334] "Generic (PLEG): container finished" podID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" exitCode=143 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785234 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerDied","Data":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785742 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerDied","Data":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785787 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerDied","Data":"7582431cc96f656a76c273158d6a6121cb9dd22056c9bc46740b2c3ec436de2b"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785819 4808 scope.go:117] "RemoveContainer" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785685 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.799507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerStarted","Data":"605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.816788 4808 generic.go:334] "Generic (PLEG): container finished" podID="41f86f53-7772-428e-b916-8624c83de123" containerID="af2c8b60da9d5276edbe2e0351b8e1093617fb76e21f063ad9744c8103bb6313" exitCode=0 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.816855 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" event={"ID":"41f86f53-7772-428e-b916-8624c83de123","Type":"ContainerDied","Data":"af2c8b60da9d5276edbe2e0351b8e1093617fb76e21f063ad9744c8103bb6313"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.824396 4808 generic.go:334] "Generic (PLEG): container finished" podID="b7820c3c-fe38-46dd-906a-498a579d0805" containerID="8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7" exitCode=0 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.825337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerDied","Data":"8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.835632 4808 scope.go:117] "RemoveContainer" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.847560 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=19.847541746 podStartE2EDuration="19.847541746s" podCreationTimestamp="2026-02-17 16:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:05.834848182 +0000 UTC m=+1269.351207255" watchObservedRunningTime="2026-02-17 16:15:05.847541746 +0000 UTC m=+1269.363900819" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.906762 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jcqjf" podStartSLOduration=2.961862571 podStartE2EDuration="46.906742869s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:20.583674336 +0000 UTC m=+1224.100033409" lastFinishedPulling="2026-02-17 16:15:04.528554634 +0000 UTC m=+1268.044913707" observedRunningTime="2026-02-17 16:15:05.902638358 +0000 UTC m=+1269.418997431" watchObservedRunningTime="2026-02-17 16:15:05.906742869 +0000 UTC m=+1269.423101952" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909183 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod 
\"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909398 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909447 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909501 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909651 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909935 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs" (OuterVolumeSpecName: "logs") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.910683 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.910943 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.922795 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts" (OuterVolumeSpecName: "scripts") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.922928 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5" (OuterVolumeSpecName: "kube-api-access-mkqj5") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "kube-api-access-mkqj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.923053 4808 scope.go:117] "RemoveContainer" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: E0217 16:15:05.923731 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": container with ID starting with 25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b not found: ID does not exist" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.923760 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} err="failed to get container status \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": rpc error: code = NotFound desc = could not find container \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": container with ID starting with 25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.923787 4808 scope.go:117] "RemoveContainer" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: E0217 16:15:05.926118 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": container with ID starting with 8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716 not found: ID does not exist" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926158 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} err="failed to get container status \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": rpc error: code = NotFound desc = could not find container \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": container with ID starting with 8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716 not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926215 4808 scope.go:117] "RemoveContainer" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926607 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} err="failed to get container status \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": rpc error: code = NotFound desc = could not find 
container \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": container with ID starting with 25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926633 4808 scope.go:117] "RemoveContainer" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.927008 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} err="failed to get container status \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": rpc error: code = NotFound desc = could not find container \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": container with ID starting with 8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716 not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.938515 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (OuterVolumeSpecName: "glance") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.972057 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.976046 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data" (OuterVolumeSpecName: "config-data") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.012958 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013034 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013047 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013058 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013069 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013481 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.056211 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.056519 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522") on node "crc" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.119373 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.146537 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.176648 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.189715 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: E0217 16:15:06.190166 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190179 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" Feb 17 16:15:06 crc kubenswrapper[4808]: E0217 16:15:06.190197 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190205 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190372 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190394 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.194055 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.201049 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.201272 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.240739 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.264588 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323443 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323494 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323531 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323728 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323762 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324051 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324143 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324199 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324333 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324439 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.333241 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.334111 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts" (OuterVolumeSpecName: "scripts") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.360900 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8" (OuterVolumeSpecName: "kube-api-access-h27j8") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "kube-api-access-h27j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.361206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.365416 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data" (OuterVolumeSpecName: "config-data") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.366436 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426373 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426648 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426674 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426742 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426757 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426769 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426781 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426792 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426804 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.429976 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.430194 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.431125 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.433178 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.435228 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.435876 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.435903 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/793125420e976eb43638bc1f8c10c1dbf19200ea40f241dea1aa3deff96042e8/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.436436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.447204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.468473 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.491789 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.579650 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.669814 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.669923 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.669998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.670029 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.670063 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.670099 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.670327 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.672019 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.673410 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs" (OuterVolumeSpecName: "logs") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.682201 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x" (OuterVolumeSpecName: "kube-api-access-7fc4x") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "kube-api-access-7fc4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.684591 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts" (OuterVolumeSpecName: "scripts") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.689433 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.690251 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (OuterVolumeSpecName: "glance") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.729150 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.773940 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.773976 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.773988 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.774017 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.774029 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.774040 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.783880 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data" (OuterVolumeSpecName: "config-data") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.808842 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.809233 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" containerID="cri-o://efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" gracePeriod=10 Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.854780 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.855240 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c") on node "crc" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.883963 4808 generic.go:334] "Generic (PLEG): container finished" podID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" exitCode=0 Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884009 4808 generic.go:334] "Generic (PLEG): container finished" podID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" exitCode=143 Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884153 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884549 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerDied","Data":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"} Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884604 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerDied","Data":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"} Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884621 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerDied","Data":"c10fc6d6f2a4869db9fa18326dfe2683218bcdc439daca6286604be99d676aab"} Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884639 4808 scope.go:117] "RemoveContainer" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.885697 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.885905 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.914482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerStarted","Data":"a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652"} Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.936811 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.937461 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerDied","Data":"c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880"} Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.937553 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.000388 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.000833 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.020598 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-wdrmd" podStartSLOduration=2.805470155 podStartE2EDuration="48.020564586s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:21.193016813 +0000 UTC m=+1224.709375886" lastFinishedPulling="2026-02-17 16:15:06.408111244 +0000 UTC m=+1269.924470317" observedRunningTime="2026-02-17 16:15:06.968302032 +0000 UTC m=+1270.484661105" watchObservedRunningTime="2026-02-17 16:15:07.020564586 +0000 UTC m=+1270.536923659" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.026907 4808 scope.go:117] "RemoveContainer" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045281 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.045712 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045730 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log" Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.045755 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045763 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd" Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.045776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" containerName="keystone-bootstrap" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045783 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" containerName="keystone-bootstrap" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045958 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045974 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045989 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" containerName="keystone-bootstrap" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.057392 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.057840 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.074871 4808 scope.go:117] "RemoveContainer" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.075500 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.075705 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.092388 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": container with ID starting with 4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795 not found: ID does not exist" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.092418 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"} err="failed to get container status \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": rpc error: code = NotFound desc = could not find container \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": container with ID starting with 4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795 not found: ID does not exist" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.092442 4808 scope.go:117] "RemoveContainer" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.093006 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": container with ID starting with 98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2 not found: ID does not exist" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093049 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"} err="failed to get container status \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": rpc error: code = NotFound desc = could not find container \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": container with ID starting with 98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2 not found: ID does not exist" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093093 4808 scope.go:117] "RemoveContainer" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 
16:15:07.093354 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"} err="failed to get container status \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": rpc error: code = NotFound desc = could not find container \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": container with ID starting with 4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795 not found: ID does not exist" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093373 4808 scope.go:117] "RemoveContainer" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093551 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"} err="failed to get container status \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": rpc error: code = NotFound desc = could not find container \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": container with ID starting with 98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2 not found: ID does not exist" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.192513 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" path="/var/lib/kubelet/pods/03b7a5d2-f785-4f3f-962d-b82b7d922dde/volumes" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.193346 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" path="/var/lib/kubelet/pods/f547a16d-87f8-4ee7-96a5-c4039bfdb453/volumes" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.194087 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-679dfcbbb9-npbsd"] Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.199204 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679dfcbbb9-npbsd"] Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.199282 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.205879 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206047 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206114 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206192 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206436 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.208485 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.208829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209157 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209334 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209453 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209933 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.215113 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.266262 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316734 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-scripts\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316841 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-fernet-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-internal-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316944 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316961 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-credential-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316986 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317010 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvpjm\" (UniqueName: \"kubernetes.io/projected/8a521aa0-4048-49a0-b6c1-32e07f349ac5-kube-api-access-xvpjm\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317036 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317079 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-config-data\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317098 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-public-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317124 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317144 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-combined-ca-bundle\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.322219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.324414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.330777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.331029 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.331840 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.338862 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.338914 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/babb0a58e49abb7abbb526a723d7265132519584485959e000cf4b8b02c96a84/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.341907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.342460 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.421647 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvpjm\" (UniqueName: \"kubernetes.io/projected/8a521aa0-4048-49a0-b6c1-32e07f349ac5-kube-api-access-xvpjm\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422489 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-config-data\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-public-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422592 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-combined-ca-bundle\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422637 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-scripts\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422680 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-fernet-keys\") pod 
\"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422760 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-internal-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-credential-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.429824 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-credential-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.430481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-public-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.436217 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-fernet-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.439296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-config-data\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.443984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.448408 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-combined-ca-bundle\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.450904 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-scripts\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 
crc kubenswrapper[4808]: I0217 16:15:07.472194 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvpjm\" (UniqueName: \"kubernetes.io/projected/8a521aa0-4048-49a0-b6c1-32e07f349ac5-kube-api-access-xvpjm\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.484825 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-internal-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.507339 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.530025 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.786994 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.852791 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"41f86f53-7772-428e-b916-8624c83de123\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.852857 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"41f86f53-7772-428e-b916-8624c83de123\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.852998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"41f86f53-7772-428e-b916-8624c83de123\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.854269 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume" (OuterVolumeSpecName: "config-volume") pod "41f86f53-7772-428e-b916-8624c83de123" (UID: "41f86f53-7772-428e-b916-8624c83de123"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.857609 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp" (OuterVolumeSpecName: "kube-api-access-zg4tp") pod "41f86f53-7772-428e-b916-8624c83de123" (UID: "41f86f53-7772-428e-b916-8624c83de123"). InnerVolumeSpecName "kube-api-access-zg4tp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.857769 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "41f86f53-7772-428e-b916-8624c83de123" (UID: "41f86f53-7772-428e-b916-8624c83de123"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.909714 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.939320 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.957080 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.957110 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.957124 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.973603 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.973607 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" event={"ID":"41f86f53-7772-428e-b916-8624c83de123","Type":"ContainerDied","Data":"bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9"} Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.973830 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.978674 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.978804 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerDied","Data":"5b531905add091d4dfe9c3b871669f1b4764b98e78ffc02ea10bcfde5b754358"} Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.978841 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b531905add091d4dfe9c3b871669f1b4764b98e78ffc02ea10bcfde5b754358" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.061895 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.061962 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062006 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062081 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062203 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062247 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062383 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062415 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062477 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062527 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.063467 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs" (OuterVolumeSpecName: "logs") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.066056 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067433 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac763412-39e7-40d0-892a-57ac801af2bb" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" exitCode=0 Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067666 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067793 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerDied","Data":"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9"} Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067910 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerDied","Data":"027ce35e95410cc92a867a6b938a45485c623b5bfa8d8827b979b970dbe86f22"} Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.070379 4808 scope.go:117] "RemoveContainer" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.082880 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw" (OuterVolumeSpecName: "kube-api-access-zz8lw") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "kube-api-access-zz8lw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.086753 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr" (OuterVolumeSpecName: "kube-api-access-7bzxr") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "kube-api-access-7bzxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.088298 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts" (OuterVolumeSpecName: "scripts") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.125896 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.143945 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerStarted","Data":"5259b7f9e5eb8d16dd9b6467f0a2e9d1eee838ac2578fd7225262f0187ce85fa"} Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.150674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data" (OuterVolumeSpecName: "config-data") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.162553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168092 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168109 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168121 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168130 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168140 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168148 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.183999 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config" (OuterVolumeSpecName: "config") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.198198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.198377 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.136643 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.272934 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.275953 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.276047 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.276114 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.296032 4808 scope.go:117] "RemoveContainer" containerID="3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.346979 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679dfcbbb9-npbsd"] Feb 17 16:15:08 crc kubenswrapper[4808]: W0217 16:15:08.355999 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a521aa0_4048_49a0_b6c1_32e07f349ac5.slice/crio-ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48 WatchSource:0}: Error finding container ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48: Status 404 returned error can't find the container with id ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48 Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.370873 4808 scope.go:117] "RemoveContainer" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" Feb 17 16:15:08 crc kubenswrapper[4808]: E0217 16:15:08.371433 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9\": container with ID starting with efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9 not found: ID does not exist" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.371460 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9"} err="failed to get container status \"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9\": rpc error: code = NotFound desc = could not find container \"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9\": container with ID starting with efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9 not found: ID does not exist" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.371480 4808 scope.go:117] "RemoveContainer" containerID="3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0" Feb 17 16:15:08 crc kubenswrapper[4808]: E0217 16:15:08.371735 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.371750 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0"} err="failed to get container status \"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0\": rpc error: code = NotFound desc = could not find container \"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0\": container with ID starting with 3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0 not found: ID does not exist"
Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.557192 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"]
Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.583956 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"]
Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.613298 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165317 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" path="/var/lib/kubelet/pods/ac763412-39e7-40d0-892a-57ac801af2bb/volumes"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165909 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165926 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679dfcbbb9-npbsd" event={"ID":"8a521aa0-4048-49a0-b6c1-32e07f349ac5","Type":"ContainerStarted","Data":"9b80e856a1484d326bbd785dad5941a60017ee1129bcf6e5805f921083557b78"}
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165939 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679dfcbbb9-npbsd" event={"ID":"8a521aa0-4048-49a0-b6c1-32e07f349ac5","Type":"ContainerStarted","Data":"ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48"}
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.172759 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerStarted","Data":"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f"}
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.177156 4808 generic.go:334] "Generic (PLEG): container finished" podID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerID="d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a" exitCode=0
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.177254 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerDied","Data":"d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a"}
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.181817 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerStarted","Data":"674bc197545e528a3fae6a8ee441743eba630fd0f6cf0ca9277898370f13b963"}
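The kubelet_volumes.go:163 line above is the last step of pod deletion: once every volume is unmounted and the API object is removed (SyncLoop DELETE, then REMOVE), the per-pod directory under /var/lib/kubelet/pods is deleted. A rough sketch of that orphan scan, with a made-up activePods set and temp-dir layout (not the kubelet's code):

```go
// Remove per-pod volume directories for pods that are no longer active,
// as in "Cleaned up orphaned pod volumes dir" from kubelet_volumes.go.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cleanupOrphanedPodDirs(podsRoot string, activePods map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || activePods[e.Name()] {
			continue // still a live pod; keep its volumes dir
		}
		dir := filepath.Join(podsRoot, e.Name(), "volumes")
		if err := os.RemoveAll(dir); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), dir)
	}
	return nil
}

func main() {
	root, _ := os.MkdirTemp("", "pods")
	_ = os.MkdirAll(filepath.Join(root, "ac763412-39e7-40d0-892a-57ac801af2bb", "volumes"), 0o755)
	_ = cleanupOrphanedPodDirs(root, map[string]bool{ /* the deleted pod is not listed */ })
}
```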
event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerStarted","Data":"674bc197545e528a3fae6a8ee441743eba630fd0f6cf0ca9277898370f13b963"} Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.196675 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-679dfcbbb9-npbsd" podStartSLOduration=3.196659065 podStartE2EDuration="3.196659065s" podCreationTimestamp="2026-02-17 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:09.195840223 +0000 UTC m=+1272.712199296" watchObservedRunningTime="2026-02-17 16:15:09.196659065 +0000 UTC m=+1272.713018138" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.324567 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-76b995d5cb-7xs25"] Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325422 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325450 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325460 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7820c3c-fe38-46dd-906a-498a579d0805" containerName="placement-db-sync" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325476 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7820c3c-fe38-46dd-906a-498a579d0805" containerName="placement-db-sync" Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325489 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="init" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325499 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="init" Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325510 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f86f53-7772-428e-b916-8624c83de123" containerName="collect-profiles" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325519 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f86f53-7772-428e-b916-8624c83de123" containerName="collect-profiles" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325801 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f86f53-7772-428e-b916-8624c83de123" containerName="collect-profiles" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325847 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7820c3c-fe38-46dd-906a-498a579d0805" containerName="placement-db-sync" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325866 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.327279 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.330889 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336114 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336289 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336384 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p4pcv"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336486 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.343738 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76b995d5cb-7xs25"]
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401487 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-combined-ca-bundle\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401556 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-scripts\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401627 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-config-data\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401659 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmrh\" (UniqueName: \"kubernetes.io/projected/ab7f0766-47a0-4616-b6dc-32957d59188a-kube-api-access-msmrh\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401693 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-public-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-internal-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25"
Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 
16:15:09.401779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f0766-47a0-4616-b6dc-32957d59188a-logs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506136 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f0766-47a0-4616-b6dc-32957d59188a-logs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506305 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-combined-ca-bundle\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506353 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-scripts\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506430 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-config-data\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msmrh\" (UniqueName: \"kubernetes.io/projected/ab7f0766-47a0-4616-b6dc-32957d59188a-kube-api-access-msmrh\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506484 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-public-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-internal-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506624 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f0766-47a0-4616-b6dc-32957d59188a-logs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.512338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-internal-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.513241 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-config-data\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.515086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-scripts\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.515167 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-public-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.517035 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-combined-ca-bundle\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.529950 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msmrh\" (UniqueName: \"kubernetes.io/projected/ab7f0766-47a0-4616-b6dc-32957d59188a-kube-api-access-msmrh\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.653363 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.196117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerStarted","Data":"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5"}
Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.209726 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerStarted","Data":"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"}
Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.209765 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerStarted","Data":"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"}
Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.234293 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.234270678 podStartE2EDuration="4.234270678s" podCreationTimestamp="2026-02-17 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:10.230686811 +0000 UTC m=+1273.747045894" watchObservedRunningTime="2026-02-17 16:15:10.234270678 +0000 UTC m=+1273.750629751"
Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.255591 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.255555805 podStartE2EDuration="4.255555805s" podCreationTimestamp="2026-02-17 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:10.25425364 +0000 UTC m=+1273.770612723" watchObservedRunningTime="2026-02-17 16:15:10.255555805 +0000 UTC m=+1273.771914878"
Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.330634 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76b995d5cb-7xs25"]
Feb 17 16:15:11 crc kubenswrapper[4808]: W0217 16:15:11.994910 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab7f0766_47a0_4616_b6dc_32957d59188a.slice/crio-b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428 WatchSource:0}: Error finding container b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428: Status 404 returned error can't find the container with id b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428
Feb 17 16:15:12 crc kubenswrapper[4808]: I0217 16:15:12.235332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b995d5cb-7xs25" event={"ID":"ab7f0766-47a0-4616-b6dc-32957d59188a","Type":"ContainerStarted","Data":"b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428"}
Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.248875 4808 generic.go:334] "Generic (PLEG): container finished" podID="2ec52dbb-ca2f-4013-8536-972042607240" containerID="a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652" exitCode=0
Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.248944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerDied","Data":"a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652"} Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.252559 4808 generic.go:334] "Generic (PLEG): container finished" podID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerID="605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06" exitCode=0 Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.252635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerDied","Data":"605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06"} Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.564854 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.700115 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"5bf4d932-664a-46c6-bec5-f2b70950c824\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.700163 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"5bf4d932-664a-46c6-bec5-f2b70950c824\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.700191 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"5bf4d932-664a-46c6-bec5-f2b70950c824\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.703620 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5bf4d932-664a-46c6-bec5-f2b70950c824" (UID: "5bf4d932-664a-46c6-bec5-f2b70950c824"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.704206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8" (OuterVolumeSpecName: "kube-api-access-2zvc8") pod "5bf4d932-664a-46c6-bec5-f2b70950c824" (UID: "5bf4d932-664a-46c6-bec5-f2b70950c824"). InnerVolumeSpecName "kube-api-access-2zvc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.731237 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bf4d932-664a-46c6-bec5-f2b70950c824" (UID: "5bf4d932-664a-46c6-bec5-f2b70950c824"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.803510 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.803821 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.803837 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.275983 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.278494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerDied","Data":"9ba656f842dfb00605cd2712c9679dadbf966fdee137e5405e4ec802b02357c9"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.278546 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba656f842dfb00605cd2712c9679dadbf966fdee137e5405e4ec802b02357c9" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.278674 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284596 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b995d5cb-7xs25" event={"ID":"ab7f0766-47a0-4616-b6dc-32957d59188a","Type":"ContainerStarted","Data":"1ac5810a1c1e5917de8eae77f2195ae692569c3a3124154a08bc9b36894f6566"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b995d5cb-7xs25" event={"ID":"ab7f0766-47a0-4616-b6dc-32957d59188a","Type":"ContainerStarted","Data":"94683a775902e76377bb4a1d51e3c26fa151e5d1d30203b370523ab19d1a4405"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284877 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284929 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.316691 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-76b995d5cb-7xs25" podStartSLOduration=5.316674262 podStartE2EDuration="5.316674262s" podCreationTimestamp="2026-02-17 16:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:14.315937242 +0000 UTC m=+1277.832296335" watchObservedRunningTime="2026-02-17 16:15:14.316674262 +0000 UTC m=+1277.833033345" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.796584 4808 util.go:48] "No ready sandbox for pod can be found. 
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.803546 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf"
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927092 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927442 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927463 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927551 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927611 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927640 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927695 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927752 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927783 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") "
Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927802 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927864 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.928541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.928893 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.942028 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts" (OuterVolumeSpecName: "scripts") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.960681 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms" (OuterVolumeSpecName: "kube-api-access-5jmms") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "kube-api-access-5jmms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.960861 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962346 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d78867d94-7lhqs"] Feb 17 16:15:14 crc kubenswrapper[4808]: E0217 16:15:14.962912 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerName="barbican-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962938 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerName="barbican-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: E0217 16:15:14.962949 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec52dbb-ca2f-4013-8536-972042607240" containerName="cloudkitty-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962957 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec52dbb-ca2f-4013-8536-972042607240" containerName="cloudkitty-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: E0217 16:15:14.962979 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerName="cinder-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962986 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerName="cinder-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963173 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerName="cinder-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963190 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec52dbb-ca2f-4013-8536-972042607240" containerName="cloudkitty-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963205 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerName="barbican-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963477 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46" (OuterVolumeSpecName: "kube-api-access-9mc46") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "kube-api-access-9mc46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.965156 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.968403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs" (OuterVolumeSpecName: "certs") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.969260 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-26x5l" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.969653 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.992871 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts" (OuterVolumeSpecName: "scripts") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.993140 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.999122 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55f6d995c5-hnz4n"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.001295 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.007738 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.019283 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data" (OuterVolumeSpecName: "config-data") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.032445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033798 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033875 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033926 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033976 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034023 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034074 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034135 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034195 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.036816 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.038592 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.045476 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d78867d94-7lhqs"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.054470 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.073300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f6d995c5-hnz4n"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.090793 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.131400 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data" (OuterVolumeSpecName: "config-data") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144118 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144160 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144183 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data-custom\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144203 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data-custom\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144250 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2dxj\" (UniqueName: \"kubernetes.io/projected/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-kube-api-access-q2dxj\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144309 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990b124d-3558-48ad-87f8-503580da5cc7-logs\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144340 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drb8b\" (UniqueName: \"kubernetes.io/projected/990b124d-3558-48ad-87f8-503580da5cc7-kube-api-access-drb8b\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-combined-ca-bundle\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144443 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144495 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-combined-ca-bundle\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: 
\"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-logs\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144559 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144584 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.195874 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.205406 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.212448 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.216523 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.246415 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.246822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.246870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data-custom\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247254 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data-custom\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247298 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod 
\"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247376 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2dxj\" (UniqueName: \"kubernetes.io/projected/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-kube-api-access-q2dxj\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247403 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990b124d-3558-48ad-87f8-503580da5cc7-logs\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drb8b\" (UniqueName: \"kubernetes.io/projected/990b124d-3558-48ad-87f8-503580da5cc7-kube-api-access-drb8b\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247967 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248002 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-combined-ca-bundle\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990b124d-3558-48ad-87f8-503580da5cc7-logs\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248221 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248276 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-combined-ca-bundle\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-logs\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248757 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-logs\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248905 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248953 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.251027 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod 
\"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.251061 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data-custom\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.251707 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data-custom\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.252153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.252160 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-combined-ca-bundle\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.254532 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-combined-ca-bundle\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.255249 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.265629 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2dxj\" (UniqueName: \"kubernetes.io/projected/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-kube-api-access-q2dxj\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.266119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drb8b\" (UniqueName: \"kubernetes.io/projected/990b124d-3558-48ad-87f8-503580da5cc7-kube-api-access-drb8b\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.266480 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clkmv\" (UniqueName: 
\"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.306729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerDied","Data":"e334d06468b3a37f46d5f6db68268b3881996656b8f3df2be0b3c006d2589a72"} Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.306769 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e334d06468b3a37f46d5f6db68268b3881996656b8f3df2be0b3c006d2589a72" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.306821 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.320462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerDied","Data":"722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc"} Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.320498 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.320695 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352520 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352592 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352621 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352675 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352812 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod 
\"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.372000 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.373535 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.378843 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379493 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379683 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kqv9d" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379701 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379838 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.388478 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.398622 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.418462 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.450351 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459595 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459642 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459694 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.465704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.467551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.482830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.494836 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc 
kubenswrapper[4808]: I0217 16:15:15.503197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.526802 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.532320 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.542096 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.543134 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.543374 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bqdgs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.549919 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.555312 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.564018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573588 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573889 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573957 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573987 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " 
pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.585159 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.624361 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.653338 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.656628 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676235 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676342 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676359 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676383 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676424 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676470 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.682287 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.685940 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.690260 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.691520 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.699988 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.706305 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.758495 4808 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.760325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.766808 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792192 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792221 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792250 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792269 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792362 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792392 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792413 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792434 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792461 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.794967 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.805855 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.807816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.809836 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.811256 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.837952 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxm9g\" (UniqueName: 
\"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.859184 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894418 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894504 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894583 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894632 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894662 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896037 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896100 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896148 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896512 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.897085 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.897453 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.898328 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: 
\"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.924811 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.992852 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997353 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997368 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997465 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997493 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997715 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 
16:15:15.997893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.002100 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.003083 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.010975 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.013196 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.014275 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.015947 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.045243 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.063287 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.143105 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.224631 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d78867d94-7lhqs"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.313336 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f6d995c5-hnz4n"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.330942 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.331251 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-api" containerID="cri-o://811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d" gracePeriod=30 Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.331382 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" containerID="cri-o://fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84" gracePeriod=30 Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.353873 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c6489dbc7-2ddnw"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.355912 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.369731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" event={"ID":"990b124d-3558-48ad-87f8-503580da5cc7","Type":"ContainerStarted","Data":"31cc8d75c1f4d242197ba91a2b42ad543f364921b9fb333fa6cbb71110597d2b"} Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.373208 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6489dbc7-2ddnw"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.387079 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9696/\": EOF" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.414902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-public-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.417829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-ovndb-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418033 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf8l5\" (UniqueName: \"kubernetes.io/projected/b7e54d61-1bf6-41ae-b885-7e6448d351a5-kube-api-access-sf8l5\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: 
\"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418570 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-httpd-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418828 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-combined-ca-bundle\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418870 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-internal-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.424967 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:16 crc kubenswrapper[4808]: W0217 16:15:16.428530 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6974c05c_8d53_4225_8ccd_c8c7c8956073.slice/crio-f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d WatchSource:0}: Error finding container f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d: Status 404 returned error can't find the container with id f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.439473 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:16 crc kubenswrapper[4808]: W0217 16:15:16.462752 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd86efad_8ad2_4e38_b731_5f892d34a582.slice/crio-5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d WatchSource:0}: Error finding container 5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d: Status 404 returned error can't find the container with id 5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.521389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-ovndb-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.521683 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sf8l5\" (UniqueName: \"kubernetes.io/projected/b7e54d61-1bf6-41ae-b885-7e6448d351a5-kube-api-access-sf8l5\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.521803 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-httpd-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535046 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-combined-ca-bundle\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535099 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-internal-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-public-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.527314 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.526627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-ovndb-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.539178 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-public-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.540908 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-combined-ca-bundle\") pod 
\"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.541640 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf8l5\" (UniqueName: \"kubernetes.io/projected/b7e54d61-1bf6-41ae-b885-7e6448d351a5-kube-api-access-sf8l5\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.542182 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-httpd-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.554602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-internal-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.580788 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.581274 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.664534 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.699331 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.729672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.794903 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.812075 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:15:16 crc kubenswrapper[4808]: W0217 16:15:16.831695 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf7344d6_b8f4_4234_bb75_f4d7702b040b.slice/crio-ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc WatchSource:0}: Error finding container ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc: Status 404 returned error can't find the container with id ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.033625 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.191765 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:17 crc kubenswrapper[4808]: W0217 16:15:17.199082 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebaafdbf_7612_40c9_b044_697f41e930e2.slice/crio-e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369 WatchSource:0}: Error finding container e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369: Status 404 returned error can't find the container with id e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369 Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.395113 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerStarted","Data":"0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.395396 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerStarted","Data":"ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.399798 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerStarted","Data":"5ac05208b68a6fcecfd3daeda1e831c1b6b22287e3316af8e4abbf40c7bb9c8b"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.409363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerStarted","Data":"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.409448 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerStarted","Data":"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1"} Feb 17 16:15:17 crc 
kubenswrapper[4808]: I0217 16:15:17.409489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerStarted","Data":"5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.410407 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.410441 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.422166 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-cftjl" podStartSLOduration=2.422146583 podStartE2EDuration="2.422146583s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:17.410894768 +0000 UTC m=+1280.927253851" watchObservedRunningTime="2026-02-17 16:15:17.422146583 +0000 UTC m=+1280.938505656" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.440057 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6489dbc7-2ddnw"] Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.445797 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75bd7dcff4-tfcmj" podStartSLOduration=2.445777634 podStartE2EDuration="2.445777634s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:17.427543139 +0000 UTC m=+1280.943902212" watchObservedRunningTime="2026-02-17 16:15:17.445777634 +0000 UTC m=+1280.962136707" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.445850 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerStarted","Data":"fcedd92b0b29bbf31af03e2bbced87e666dc9a438c55215268bb770cfadf5c2a"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.451417 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerStarted","Data":"e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.458386 4808 generic.go:334] "Generic (PLEG): container finished" podID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerID="d99cd647368dafaff2816f4fe6bc8fcc90f0c68e206ab7df9e289310b1ebed6f" exitCode=0 Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.458494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" event={"ID":"6974c05c-8d53-4225-8ccd-c8c7c8956073","Type":"ContainerDied","Data":"d99cd647368dafaff2816f4fe6bc8fcc90f0c68e206ab7df9e289310b1ebed6f"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.458538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" event={"ID":"6974c05c-8d53-4225-8ccd-c8c7c8956073","Type":"ContainerStarted","Data":"f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.476178 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerID="fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84" exitCode=0 Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.476259 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerDied","Data":"fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.489464 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6d995c5-hnz4n" event={"ID":"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c","Type":"ContainerStarted","Data":"3ac1e3efa8e9d62a3f262d3c0293a5072cfc89a70e67782faeb9e36ee9c3e8e5"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.489506 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.489616 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.513835 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.513887 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.652828 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.661851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.916926 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.105517 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.105595 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106171 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106328 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106673 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.138417 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv" (OuterVolumeSpecName: "kube-api-access-clkmv") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "kube-api-access-clkmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.207787 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.210250 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.210298 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.241864 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config" (OuterVolumeSpecName: "config") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.252238 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.314819 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.315020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328434 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328468 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328479 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328488 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.558122 4808 generic.go:334] "Generic (PLEG): container finished" podID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd" exitCode=0 Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.558216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerDied","Data":"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.575503 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" event={"ID":"6974c05c-8d53-4225-8ccd-c8c7c8956073","Type":"ContainerDied","Data":"f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.575554 4808 scope.go:117] "RemoveContainer" containerID="d99cd647368dafaff2816f4fe6bc8fcc90f0c68e206ab7df9e289310b1ebed6f" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.575787 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.589224 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6489dbc7-2ddnw" event={"ID":"b7e54d61-1bf6-41ae-b885-7e6448d351a5","Type":"ContainerStarted","Data":"5fd374d9d6028f00e305deb8758c5c4143b1950a00f15dfa9e62eaede9d208ba"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.589268 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6489dbc7-2ddnw" event={"ID":"b7e54d61-1bf6-41ae-b885-7e6448d351a5","Type":"ContainerStarted","Data":"961cf37b2717b91cab861ae741064ca67b4f2bf52c3c18d7423efce877131d78"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.611955 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerStarted","Data":"35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.614222 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.614823 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.877004 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.884986 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.047072 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.166814 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" path="/var/lib/kubelet/pods/6974c05c-8d53-4225-8ccd-c8c7c8956073/volumes" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.630122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerStarted","Data":"3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550"} Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.637635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerStarted","Data":"8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1"} Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642543 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6489dbc7-2ddnw" event={"ID":"b7e54d61-1bf6-41ae-b885-7e6448d351a5","Type":"ContainerStarted","Data":"6f42cce323fc28581406cdc74f3517b723c8ab5654a6663336e6f738e93f94dd"} Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642664 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642700 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642724 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.663233 4808 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/neutron-6c6489dbc7-2ddnw" podStartSLOduration=3.663215821 podStartE2EDuration="3.663215821s" podCreationTimestamp="2026-02-17 16:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:19.659791188 +0000 UTC m=+1283.176150271" watchObservedRunningTime="2026-02-17 16:15:19.663215821 +0000 UTC m=+1283.179574884" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.640229 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.660477 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.660758 4808 generic.go:334] "Generic (PLEG): container finished" podID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerID="0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158" exitCode=0 Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.660878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerDied","Data":"0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158"} Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.661315 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.661326 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.441235 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.592129 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.592182 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.671913 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.673211 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log" containerID="cri-o://35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da" gracePeriod=30 Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.673324 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api" containerID="cri-o://8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1" gracePeriod=30 Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.687800 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.704489 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.704466149 podStartE2EDuration="6.704466149s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:21.690696806 +0000 UTC m=+1285.207055879" watchObservedRunningTime="2026-02-17 16:15:21.704466149 +0000 UTC m=+1285.220825222" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.039707 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9696/\": dial tcp 10.217.0.169:9696: connect: connection refused" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.272659 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f445fb886-lsqq4"] Feb 17 16:15:22 crc kubenswrapper[4808]: E0217 16:15:22.273137 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerName="init" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.273167 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerName="init" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.273494 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerName="init" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.274836 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.276530 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.276740 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.290841 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f445fb886-lsqq4"] Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439732 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-combined-ca-bundle\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439845 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9glp\" (UniqueName: \"kubernetes.io/projected/a9bf13d7-3430-4818-b8fc-239796570b6c-kube-api-access-b9glp\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439872 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-internal-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439927 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bf13d7-3430-4818-b8fc-239796570b6c-logs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.440008 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data-custom\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.440038 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-public-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.541882 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.541946 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-combined-ca-bundle\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.541980 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9glp\" (UniqueName: \"kubernetes.io/projected/a9bf13d7-3430-4818-b8fc-239796570b6c-kube-api-access-b9glp\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542014 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-internal-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542051 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bf13d7-3430-4818-b8fc-239796570b6c-logs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data-custom\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-public-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.544362 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bf13d7-3430-4818-b8fc-239796570b6c-logs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.551893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data-custom\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.553252 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-public-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.553662 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-combined-ca-bundle\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.553919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-internal-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.557331 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.569125 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9glp\" (UniqueName: \"kubernetes.io/projected/a9bf13d7-3430-4818-b8fc-239796570b6c-kube-api-access-b9glp\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.607563 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685006 4808 generic.go:334] "Generic (PLEG): container finished" podID="9f172158-bc5a-40a6-afc6-df84970d436d" containerID="8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1" exitCode=0 Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685042 4808 generic.go:334] "Generic (PLEG): container finished" podID="9f172158-bc5a-40a6-afc6-df84970d436d" containerID="35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da" exitCode=143 Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685115 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerDied","Data":"8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1"} Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685167 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerDied","Data":"35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da"} Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.687864 4808 generic.go:334] "Generic (PLEG): container finished" podID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerID="811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d" exitCode=0 Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.688678 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerDied","Data":"811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d"} Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.144352 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.146040 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.182:8776/healthcheck\": dial tcp 10.217.0.182:8776: connect: connection refused" Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.929778 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.968822 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.242628 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.266157 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347322 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347619 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347664 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347685 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347706 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347732 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347826 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347919 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347967 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347982 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: 
\"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.348044 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.348088 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.367497 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts" (OuterVolumeSpecName: "scripts") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.374584 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz" (OuterVolumeSpecName: "kube-api-access-kfzgz") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "kube-api-access-kfzgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.375499 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p" (OuterVolumeSpecName: "kube-api-access-84l8p") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "kube-api-access-84l8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.386800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.404305 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data" (OuterVolumeSpecName: "config-data") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.413370 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs" (OuterVolumeSpecName: "certs") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.448333 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.452762 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.452887 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.453168 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.465984 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.466154 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.466226 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.466302 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.505859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.539719 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config" (OuterVolumeSpecName: "config") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.545865 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.568654 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.568685 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.613883 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.636312 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.648041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669541 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669677 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669824 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669850 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669882 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669928 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669963 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670431 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670445 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670454 4808 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670748 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.674246 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs" (OuterVolumeSpecName: "logs") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.678962 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts" (OuterVolumeSpecName: "scripts") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.679377 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg" (OuterVolumeSpecName: "kube-api-access-l8ndg") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "kube-api-access-l8ndg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.685751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.730678 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.746411 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data" (OuterVolumeSpecName: "config-data") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771664 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771687 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771697 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771705 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771713 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771724 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771732 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.772185 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerStarted","Data":"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"} Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.773903 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.776038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6d995c5-hnz4n" event={"ID":"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c","Type":"ContainerStarted","Data":"3f73cc1f4bde00bd908b4cd2358df0443bc927e5f04373e71d79090a1a91ee61"} Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.779067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerDied","Data":"6a095cda0c57e7c83e37162d0a00993ab0fc7d2ed318b1cd5b24f7f8e6f8ed0d"} Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.779100 4808 scope.go:117] "RemoveContainer" containerID="fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.779205 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.789977 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerDied","Data":"ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc"} Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.790072 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.790140 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.802436 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" podStartSLOduration=12.802419914 podStartE2EDuration="12.802419914s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:27.797669685 +0000 UTC m=+1291.314028768" watchObservedRunningTime="2026-02-17 16:15:27.802419914 +0000 UTC m=+1291.318778987" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.804944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64"} Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805102 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent" containerID="cri-o://dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a" gracePeriod=30 Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805160 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805203 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd" containerID="cri-o://880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64" gracePeriod=30 Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805225 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent" containerID="cri-o://dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe" gracePeriod=30 Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805307 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core" containerID="cri-o://5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967" gracePeriod=30 Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.817039 4808 scope.go:117] "RemoveContainer" containerID="811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.839458 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.842031 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerDied","Data":"fcedd92b0b29bbf31af03e2bbced87e666dc9a438c55215268bb770cfadf5c2a"} Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.845310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" event={"ID":"990b124d-3558-48ad-87f8-503580da5cc7","Type":"ContainerStarted","Data":"811ef05894ae13a541e79c744cd318f4beab6bb8a4ad62bce48c9bb1f1fb9b22"} Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.909183 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.335394349 podStartE2EDuration="1m8.909161564s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:20.796322593 +0000 UTC m=+1224.312681666" lastFinishedPulling="2026-02-17 16:15:27.370089808 +0000 UTC m=+1290.886448881" observedRunningTime="2026-02-17 16:15:27.855756518 +0000 UTC m=+1291.372115611" watchObservedRunningTime="2026-02-17 16:15:27.909161564 +0000 UTC m=+1291.425520637" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.945351 4808 scope.go:117] "RemoveContainer" containerID="8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.969376 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f445fb886-lsqq4"] Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.989868 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.016711 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.027127 4808 scope.go:117] "RemoveContainer" containerID="35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.028227 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.038469 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.049709 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050142 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050157 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050174 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050181 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log" Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050207 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" 
containerName="neutron-api" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050213 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-api" Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050227 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050235 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api" Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050253 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerName="cloudkitty-storageinit" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050261 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerName="cloudkitty-storageinit" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050486 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050500 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050509 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-api" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050518 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerName="cloudkitty-storageinit" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050531 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.051806 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.057931 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.057953 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.058642 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.068373 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214550 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b221adbf-8d08-4f9c-8bb2-578555a453df-logs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214827 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214920 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b221adbf-8d08-4f9c-8bb2-578555a453df-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2skn\" (UniqueName: \"kubernetes.io/projected/b221adbf-8d08-4f9c-8bb2-578555a453df-kube-api-access-s2skn\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215325 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data-custom\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215414 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215517 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-scripts\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215627 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319390 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data-custom\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-scripts\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319772 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b221adbf-8d08-4f9c-8bb2-578555a453df-logs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319844 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319858 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b221adbf-8d08-4f9c-8bb2-578555a453df-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319898 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2skn\" 
(UniqueName: \"kubernetes.io/projected/b221adbf-8d08-4f9c-8bb2-578555a453df-kube-api-access-s2skn\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.321697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b221adbf-8d08-4f9c-8bb2-578555a453df-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.329657 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b221adbf-8d08-4f9c-8bb2-578555a453df-logs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.330049 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.346169 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.348077 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-scripts\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.350499 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.351283 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.351899 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data-custom\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.352174 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2skn\" (UniqueName: \"kubernetes.io/projected/b221adbf-8d08-4f9c-8bb2-578555a453df-kube-api-access-s2skn\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.409523 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.453370 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.454590 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.461881 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.461902 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kqv9d" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.461989 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.462082 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.462295 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.488670 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.571018 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.613482 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.615196 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629020 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629410 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629452 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629480 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.649885 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733730 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733764 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdlq\" (UniqueName: 
\"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733782 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733803 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733848 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733914 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733971 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.734018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.734032 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") pod 
\"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.745493 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.746400 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.756396 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.765291 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.766990 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.771825 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.773661 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.779689 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.790234 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.805626 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.817960 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836746 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdlq\" (UniqueName: \"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836865 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836949 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.837015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.837035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.838079 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.839043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.841498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 
16:15:28.841942 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.848274 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.869532 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdlq\" (UniqueName: \"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876747 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerID="880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64" exitCode=0
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876776 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerID="5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967" exitCode=2
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876785 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerID="dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a" exitCode=0
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64"}
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876854 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967"}
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a"}
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.882747 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f445fb886-lsqq4" event={"ID":"a9bf13d7-3430-4818-b8fc-239796570b6c","Type":"ContainerStarted","Data":"015c6612d90bd5fc05796bc7fed418ea69aea7bd10e869ca6f1496576d4a26e0"}
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.882774 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f445fb886-lsqq4" event={"ID":"a9bf13d7-3430-4818-b8fc-239796570b6c","Type":"ContainerStarted","Data":"db010c2307a19729c4620396f288f51bb34619fae666877d27a78254eb216149"}
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.883054 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f445fb886-lsqq4"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.883106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f445fb886-lsqq4"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.916780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" event={"ID":"990b124d-3558-48ad-87f8-503580da5cc7","Type":"ContainerStarted","Data":"12cf51cbaaaaa0035e7d43146a9493c075855fc56dd958e42443ac7da0c4910a"}
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.933089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6d995c5-hnz4n" event={"ID":"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c","Type":"ContainerStarted","Data":"5d3a8263da4ef5c89e34853733b82044dda3120c3c78cadd666ba2951bb4612c"}
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.964393 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.972105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.998303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.999145 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.000877 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.001026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.001123 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.001280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:28.990757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerStarted","Data":"0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e"}
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.015994 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f445fb886-lsqq4" podStartSLOduration=7.015970201 podStartE2EDuration="7.015970201s" podCreationTimestamp="2026-02-17 16:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:28.917095545 +0000 UTC m=+1292.433454618" watchObservedRunningTime="2026-02-17 16:15:29.015970201 +0000 UTC m=+1292.532329274"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.066009 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" podStartSLOduration=4.178243585 podStartE2EDuration="15.065981976s" podCreationTimestamp="2026-02-17 16:15:14 +0000 UTC" firstStartedPulling="2026-02-17 16:15:16.242707329 +0000 UTC m=+1279.759066402" lastFinishedPulling="2026-02-17 16:15:27.13044572 +0000 UTC m=+1290.646804793" observedRunningTime="2026-02-17 16:15:29.004112241 +0000 UTC m=+1292.520471324" watchObservedRunningTime="2026-02-17 16:15:29.065981976 +0000 UTC m=+1292.582341069"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.093701 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=12.917956252 podStartE2EDuration="14.093674705s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="2026-02-17 16:15:16.833995759 +0000 UTC m=+1280.350354832" lastFinishedPulling="2026-02-17 16:15:18.009714212 +0000 UTC m=+1281.526073285" observedRunningTime="2026-02-17 16:15:29.043351443 +0000 UTC m=+1292.559710516" watchObservedRunningTime="2026-02-17 16:15:29.093674705 +0000 UTC m=+1292.610033788"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.101250 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55f6d995c5-hnz4n" podStartSLOduration=4.15260695 podStartE2EDuration="15.101227829s" podCreationTimestamp="2026-02-17 16:15:14 +0000 UTC" firstStartedPulling="2026-02-17 16:15:16.381259141 +0000 UTC m=+1279.897618214" lastFinishedPulling="2026-02-17 16:15:27.32988002 +0000 UTC m=+1290.846239093" observedRunningTime="2026-02-17 16:15:29.058436781 +0000 UTC m=+1292.574795874" watchObservedRunningTime="2026-02-17 16:15:29.101227829 +0000 UTC m=+1292.617586912"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110171 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110231 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110300 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110362 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110396 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110530 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.117185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.119193 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.133094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.138090 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.142094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.145122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.160184 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.177149 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" path="/var/lib/kubelet/pods/9f172158-bc5a-40a6-afc6-df84970d436d/volumes"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.178045 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" path="/var/lib/kubelet/pods/dd20b2ca-153a-4f21-9c41-4f00bdc82b56/volumes"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.247481 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.430392 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.566653 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.842080 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"]
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.016958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerStarted","Data":"d486a3a307b0de09a60edde55636666b3342a5903cc110cae3e17e9502f50af9"}
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.021271 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerStarted","Data":"d83fa5a20f760435e6a158fc895b5bd4256f47d348c4b60bfa4934c4b8383f1a"}
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.024899 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b221adbf-8d08-4f9c-8bb2-578555a453df","Type":"ContainerStarted","Data":"d8c64ebcef65f5baba79f233ba06426dadfbe0680217c995d73865efa0d666fb"}
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.027052 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f445fb886-lsqq4" event={"ID":"a9bf13d7-3430-4818-b8fc-239796570b6c","Type":"ContainerStarted","Data":"ca295969bb0e5c39df0b90c6d6227d025c5b6e39a664f6e1537222ae6832dd6c"}
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.027819 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns" containerID="cri-o://593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953" gracePeriod=10
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.067856 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.598725 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29"
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666296 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") "
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666699 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") "
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666756 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") "
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666831 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") "
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666850 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") "
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666883 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") "
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.672797 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r" (OuterVolumeSpecName: "kube-api-access-n7z6r") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "kube-api-access-n7z6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.750675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.765094 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.769966 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.769997 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.770011 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.772043 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.780717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.791665 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config" (OuterVolumeSpecName: "config") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.872017 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.872041 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.872052 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.011749 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.043072 4808 generic.go:334] "Generic (PLEG): container finished" podID="ef386302-14e1-4b00-b816-e85da8d23114" containerID="76cc030230faf69f3923cb1665482598e8d9c392060ca1c1353369b5c8628b5a" exitCode=0
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.043123 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerDied","Data":"76cc030230faf69f3923cb1665482598e8d9c392060ca1c1353369b5c8628b5a"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046019 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerStarted","Data":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerStarted","Data":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerStarted","Data":"643be3a025f081600c92f8d5d11a7801aaad867291685319f6312aa567fb9d6a"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.048711 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b221adbf-8d08-4f9c-8bb2-578555a453df","Type":"ContainerStarted","Data":"aa8228c5daf85af14f81736842275b7f307863cb24e1467c7a4c23f8458865ca"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.048734 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b221adbf-8d08-4f9c-8bb2-578555a453df","Type":"ContainerStarted","Data":"a26d0c09826de2ec55266756d360518d92b4685278c12c05abb29f8474277c36"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.049239 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.053310 4808 generic.go:334] "Generic (PLEG): container finished" podID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953" exitCode=0
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.053939 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.057812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerDied","Data":"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.058291 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerDied","Data":"e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369"}
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.058354 4808 scope.go:117] "RemoveContainer" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.112890 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.112864165 podStartE2EDuration="3.112864165s" podCreationTimestamp="2026-02-17 16:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:31.086652646 +0000 UTC m=+1294.603011719" watchObservedRunningTime="2026-02-17 16:15:31.112864165 +0000 UTC m=+1294.629223248"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.114807 4808 scope.go:117] "RemoveContainer" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.140972 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.140949395 podStartE2EDuration="4.140949395s" podCreationTimestamp="2026-02-17 16:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:31.115018224 +0000 UTC m=+1294.631377297" watchObservedRunningTime="2026-02-17 16:15:31.140949395 +0000 UTC m=+1294.657308468"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.172523 4808 scope.go:117] "RemoveContainer" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"
Feb 17 16:15:31 crc kubenswrapper[4808]: E0217 16:15:31.174485 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953\": container with ID starting with 593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953 not found: ID does not exist" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.174540 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"} err="failed to get container status \"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953\": rpc error: code = NotFound desc = could not find container \"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953\": container with ID starting with 593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953 not found: ID does not exist"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.174590 4808 scope.go:117] "RemoveContainer" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd"
Feb 17 16:15:31 crc kubenswrapper[4808]: E0217 16:15:31.180430 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd\": container with ID starting with d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd not found: ID does not exist" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.180477 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd"} err="failed to get container status \"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd\": rpc error: code = NotFound desc = could not find container \"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd\": container with ID starting with d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd not found: ID does not exist"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.227483 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"]
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.270339 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"]
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.369618 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.622734 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.064908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerStarted","Data":"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"}
Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.066833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerStarted","Data":"893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53"}
Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.067638 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.092511 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.509988524 podStartE2EDuration="4.092490281s" podCreationTimestamp="2026-02-17 16:15:28 +0000 UTC" firstStartedPulling="2026-02-17 16:15:29.589977453 +0000 UTC m=+1293.106336516" lastFinishedPulling="2026-02-17 16:15:31.1724792 +0000 UTC m=+1294.688838273" observedRunningTime="2026-02-17 16:15:32.085122812 +0000 UTC m=+1295.601481895" watchObservedRunningTime="2026-02-17 16:15:32.092490281 +0000 UTC m=+1295.608849354"
Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.114657 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.120049 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-786qn" podStartSLOduration=4.120033386 podStartE2EDuration="4.120033386s" podCreationTimestamp="2026-02-17 16:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:32.118111484 +0000 UTC m=+1295.634470567" watchObservedRunningTime="2026-02-17 16:15:32.120033386 +0000 UTC m=+1295.636392459"
Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.185282 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.101658 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerID="dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe" exitCode=0
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.101713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe"}
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.102261 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler" containerID="cri-o://3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550" gracePeriod=30
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.102430 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe" containerID="cri-o://0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e" gracePeriod=30
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.102729 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log" containerID="cri-o://ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" gracePeriod=30
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.103283 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api" containerID="cri-o://0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" gracePeriod=30
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.168238 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" path="/var/lib/kubelet/pods/ebaafdbf-7612-40c9-b044-697f41e930e2/volumes"
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.636424 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797273 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") "
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797599 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") "
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797708 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") "
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797744 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") "
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797886 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") "
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") "
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797996 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") "
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.799253 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.799407 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.808825 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz" (OuterVolumeSpecName: "kube-api-access-j5gdz") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "kube-api-access-j5gdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.855026 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts" (OuterVolumeSpecName: "scripts") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901137 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901185 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901196 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901205 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.912303 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.930706 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.967157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data" (OuterVolumeSpecName: "config-data") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.003024 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.003066 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.003080 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.054250 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.114397 4808 generic.go:334] "Generic (PLEG): container finished" podID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerID="0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e" exitCode=0
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.114466 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerDied","Data":"0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e"}
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.117641 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.117631 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00"}
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.117902 4808 scope.go:117] "RemoveContainer" containerID="880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119833 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0a53ca-554f-4be2-a185-3eba97454429" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" exitCode=0
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119853 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119880 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerDied","Data":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"}
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119917 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerDied","Data":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"}
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119860 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0a53ca-554f-4be2-a185-3eba97454429" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" exitCode=143
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.120016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerDied","Data":"643be3a025f081600c92f8d5d11a7801aaad867291685319f6312aa567fb9d6a"}
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.120117 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc" containerID="cri-o://50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94" gracePeriod=30
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.141372 4808 scope.go:117] "RemoveContainer" containerID="5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.171867 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.182798 4808 scope.go:117] "RemoveContainer" containerID="dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213482 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") "
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") "
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213770 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") "
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213816 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") "
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213859 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") "
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213889 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") "
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213983 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") "
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.217624 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs" (OuterVolumeSpecName: "logs") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.218954 4808 scope.go:117] "RemoveContainer" containerID="dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.222344 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts" (OuterVolumeSpecName: "scripts") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.222856 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64" (OuterVolumeSpecName: "kube-api-access-gbp64") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "kube-api-access-gbp64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.223343 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs" (OuterVolumeSpecName: "certs") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.224810 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.242441 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243457 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243473 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243479 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243489 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="init"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243495 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="init"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243512 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243518 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243528 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243533 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243548 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243554 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243583 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243590 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243603 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243609 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244283 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244301 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244312 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244320 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244330 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244342 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244354 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.246534 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.246925 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.248403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.251814 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.252422 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.257689 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.267336 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data" (OuterVolumeSpecName: "config-data") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.315523 4808 scope.go:117] "RemoveContainer" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317494 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317531 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317547 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317559 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317587 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317601 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317613 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.344202 4808 scope.go:117] "RemoveContainer" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.359667 4808 scope.go:117] "RemoveContainer" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.360174 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": container with ID starting with 0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad not found: ID does not exist" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360226 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"} err="failed to get container status \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": rpc error: code = NotFound desc = could not find container \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": container with ID starting with 0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad not found: ID does not exist"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360257 4808 scope.go:117] "RemoveContainer" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"
Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.360682 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": container with ID starting with ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2 not found: ID does not exist" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360742 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"} err="failed to get container status \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": rpc error: code = NotFound desc = could not find container \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": container with ID starting with ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2 not found: ID does not exist"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360782 4808 scope.go:117] "RemoveContainer" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.361089 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"} err="failed to get container status \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": rpc error: code = NotFound desc = could not find container \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": container with ID starting with 0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad not found: ID does not exist"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.361120 4808 scope.go:117] "RemoveContainer" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.361476 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"} err="failed to get container status \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": rpc error: code = NotFound desc = could not find container \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": container with ID starting with ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2 not found: ID does not exist"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448545 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448630 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448674 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448817 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.484219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.495746 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.507963 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.509756 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.514213 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.514443 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.514557 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.522089 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550264 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550387 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550409 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550463 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550487 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550523 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.552218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0"
Feb 17 16:15:34
crc kubenswrapper[4808]: I0217 16:15:34.552505 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.555524 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.556086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.556101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.557143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.566652 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.606956 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktxk\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-kube-api-access-4ktxk\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652722 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652757 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b35dce7b-8ffe-4981-8376-5db5a01dcf77-logs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652825 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652910 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-scripts\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652932 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754015 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ktxk\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-kube-api-access-4ktxk\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754163 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754188 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754229 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b35dce7b-8ffe-4981-8376-5db5a01dcf77-logs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754257 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754311 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754345 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-scripts\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754366 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.755201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b35dce7b-8ffe-4981-8376-5db5a01dcf77-logs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc 
kubenswrapper[4808]: I0217 16:15:34.759055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.759533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.759791 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.760468 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-scripts\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.760682 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.762549 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.763338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.773798 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ktxk\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-kube-api-access-4ktxk\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.834487 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:15:35 crc kubenswrapper[4808]: W0217 16:15:35.067933 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade95199_c613_4920_aa24_6cedde28dda6.slice/crio-356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf WatchSource:0}: Error finding container 356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf: Status 404 returned error can't find the container with id 356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.068809 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.128768 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf"} Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.155333 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" path="/var/lib/kubelet/pods/bb0a53ca-554f-4be2-a185-3eba97454429/volumes" Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.156136 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" path="/var/lib/kubelet/pods/ce9fba55-1b70-4d39-a052-bff96bd8e93a/volumes" Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.299141 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:35 crc kubenswrapper[4808]: W0217 16:15:35.301964 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb35dce7b_8ffe_4981_8376_5db5a01dcf77.slice/crio-5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd WatchSource:0}: Error finding container 5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd: Status 404 returned error can't find the container with id 5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.148999 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.152527 4808 generic.go:334] "Generic (PLEG): container finished" podID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerID="3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550" exitCode=0 Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.152743 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerDied","Data":"3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b35dce7b-8ffe-4981-8376-5db5a01dcf77","Type":"ContainerStarted","Data":"435e7e168730fdbe635d838267298718859477108e0d4b40fcac3b5ef64e0fd4"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"b35dce7b-8ffe-4981-8376-5db5a01dcf77","Type":"ContainerStarted","Data":"35d8865441ee3117fccc57fcafb8ffc8b54527867783545174534182b937dbb1"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155255 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b35dce7b-8ffe-4981-8376-5db5a01dcf77","Type":"ContainerStarted","Data":"5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155376 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.225211 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.225190303 podStartE2EDuration="2.225190303s" podCreationTimestamp="2026-02-17 16:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:36.211924825 +0000 UTC m=+1299.728283928" watchObservedRunningTime="2026-02-17 16:15:36.225190303 +0000 UTC m=+1299.741549396" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.416536 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.504675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506120 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506166 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506199 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506784 4808 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.508101 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.515510 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.518500 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g" (OuterVolumeSpecName: "kube-api-access-lxm9g") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "kube-api-access-lxm9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.519504 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts" (OuterVolumeSpecName: "scripts") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.597428 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611861 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611893 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611902 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611911 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.647484 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data" (OuterVolumeSpecName: "config-data") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.713146 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.178305 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68"} Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.190537 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerDied","Data":"5ac05208b68a6fcecfd3daeda1e831c1b6b22287e3316af8e4abbf40c7bb9c8b"} Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.190617 4808 scope.go:117] "RemoveContainer" containerID="0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.190841 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.234024 4808 scope.go:117] "RemoveContainer" containerID="3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.249636 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.265187 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.289827 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:37 crc kubenswrapper[4808]: E0217 16:15:37.290938 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.290963 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler" Feb 17 16:15:37 crc kubenswrapper[4808]: E0217 16:15:37.290981 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.290990 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.291219 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.291299 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.292905 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.300951 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.320248 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.439649 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.439998 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdttm\" (UniqueName: \"kubernetes.io/projected/fce98890-1299-4c07-8a3a-739241f0bf0d-kube-api-access-kdttm\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440099 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440219 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440489 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fce98890-1299-4c07-8a3a-739241f0bf0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542332 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdttm\" (UniqueName: \"kubernetes.io/projected/fce98890-1299-4c07-8a3a-739241f0bf0d-kube-api-access-kdttm\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542432 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542557 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.543102 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.543210 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fce98890-1299-4c07-8a3a-739241f0bf0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.543444 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fce98890-1299-4c07-8a3a-739241f0bf0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.546996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.547030 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.547687 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.548263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.561510 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdttm\" (UniqueName: \"kubernetes.io/projected/fce98890-1299-4c07-8a3a-739241f0bf0d-kube-api-access-kdttm\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0" Feb 
17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.626385 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.176401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.202854 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fce98890-1299-4c07-8a3a-739241f0bf0d","Type":"ContainerStarted","Data":"24e16f7149940a0a18fedd25334887f70fc506f4c985b1c9251d82a4fb9739cc"} Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.205855 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61"} Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.966851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.069353 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.069595 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns" containerID="cri-o://f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1" gracePeriod=10 Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.163258 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" path="/var/lib/kubelet/pods/37da8fa5-9dda-4e98-9a63-a4c0036e0017/volumes" Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.304875 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fce98890-1299-4c07-8a3a-739241f0bf0d","Type":"ContainerStarted","Data":"740216c25dac67fe79b74559a81943d7b0edb6fa56bd4eaac977117b78b06d77"} Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.363135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a"} Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.363520 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.366859 4808 generic.go:334] "Generic (PLEG): container finished" podID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerID="f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1" exitCode=0 Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.366914 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerDied","Data":"f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1"} Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.389245 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.649568118 podStartE2EDuration="5.389226258s" podCreationTimestamp="2026-02-17 16:15:34 +0000 UTC" firstStartedPulling="2026-02-17 16:15:35.071421423 
+0000 UTC m=+1298.587780496" lastFinishedPulling="2026-02-17 16:15:38.811079563 +0000 UTC m=+1302.327438636" observedRunningTime="2026-02-17 16:15:39.387799519 +0000 UTC m=+1302.904158592" watchObservedRunningTime="2026-02-17 16:15:39.389226258 +0000 UTC m=+1302.905585331" Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.801837 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929160 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929649 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929731 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929759 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929797 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929836 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.941978 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.948741 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f" (OuterVolumeSpecName: "kube-api-access-vpz7f") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "kube-api-access-vpz7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.019097 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033290 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033816 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033873 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033896 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033946 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.034403 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.034434 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.038244 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz" (OuterVolumeSpecName: "kube-api-access-7vhzz") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "kube-api-access-7vhzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.047186 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.059502 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs" (OuterVolumeSpecName: "certs") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.079817 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts" (OuterVolumeSpecName: "scripts") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.100731 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data" (OuterVolumeSpecName: "config-data") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.115454 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.135881 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config" (OuterVolumeSpecName: "config") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136700 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136725 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136734 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136742 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136751 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136761 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136770 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.149027 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.154012 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.154709 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.238699 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.238743 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.238753 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.300833 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.377462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerDied","Data":"673b376ab9a6f91954598ab4a63c75d818d8ff65e3bf87016ce8c6e162ed2846"} Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.377512 4808 scope.go:117] "RemoveContainer" containerID="f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.377640 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.392447 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fce98890-1299-4c07-8a3a-739241f0bf0d","Type":"ContainerStarted","Data":"0fbcf3645a02878f7a06725e686b31632542cd58b240a9b71ac9ab3f75c960a2"} Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399763 4808 generic.go:334] "Generic (PLEG): container finished" podID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94" exitCode=0 Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399873 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399921 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerDied","Data":"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"} Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerDied","Data":"d486a3a307b0de09a60edde55636666b3342a5903cc110cae3e17e9502f50af9"} Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.413163 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.413143763 podStartE2EDuration="3.413143763s" podCreationTimestamp="2026-02-17 16:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:40.410961734 +0000 UTC m=+1303.927320807" watchObservedRunningTime="2026-02-17 16:15:40.413143763 +0000 UTC m=+1303.929502836" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.443721 4808 scope.go:117] "RemoveContainer" containerID="dddcaac247851948b323e115b84153bfcbcb71436b40ee468a0fbbfe54d676ae" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.444986 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.473307 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.504642 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.507103 4808 scope.go:117] "RemoveContainer" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.554623 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.557778 4808 scope.go:117] "RemoveContainer" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94" Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.558950 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94\": container with ID starting with 50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94 not found: ID does not exist" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.559001 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"} err="failed to get container status \"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94\": rpc error: code = NotFound desc = could not find container \"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94\": container with ID starting with 50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94 not found: ID does not exist" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.560646 4808 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.561074 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="init" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561092 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="init" Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.561107 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561116 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc" Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.561150 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561158 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561321 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561349 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.562066 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.567001 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.571297 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645114 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-scripts\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzh4f\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-kube-api-access-nzh4f\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data\") pod 
\"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645721 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-certs\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645799 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748688 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-certs\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748738 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-scripts\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzh4f\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-kube-api-access-nzh4f\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.765348 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.766197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.778433 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.779007 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-certs\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.782013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-scripts\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.786044 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzh4f\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-kube-api-access-nzh4f\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.791021 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.880033 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.171386 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" path="/var/lib/kubelet/pods/23a1fa53-e668-4800-b54a-904f42d9eb5e/volumes" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.174185 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" path="/var/lib/kubelet/pods/abaeb0d0-670e-4a6d-a583-b4885236c73d/volumes" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.319038 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.390641 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.390850 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75bd7dcff4-tfcmj" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log" containerID="cri-o://8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" gracePeriod=30 Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.391068 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75bd7dcff4-tfcmj" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api" containerID="cri-o://6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" gracePeriod=30 Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.569678 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.632625 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.633934 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.641460 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.642135 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zgf6f" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.642840 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.660095 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.682737 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.682982 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.683120 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.683328 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwbl\" (UniqueName: \"kubernetes.io/projected/5ce308e0-2ba0-41ae-8760-e749c8d04130-kube-api-access-rbwbl\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786866 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786889 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rbwbl\" (UniqueName: \"kubernetes.io/projected/5ce308e0-2ba0-41ae-8760-e749c8d04130-kube-api-access-rbwbl\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.792660 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.792767 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.796700 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.811351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwbl\" (UniqueName: \"kubernetes.io/projected/5ce308e0-2ba0-41ae-8760-e749c8d04130-kube-api-access-rbwbl\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.829389 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.920835 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.960517 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.436885 4808 generic.go:334] "Generic (PLEG): container finished" podID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" exitCode=143 Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.437330 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerDied","Data":"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1"} Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.440783 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"14f49c04-388f-4eeb-be54-cbf3713606db","Type":"ContainerStarted","Data":"1cbdb125da22ef63042e5aa9e2d4e26a8cd2f8c72f544f58ee1d82a4a0ba7b17"} Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.440814 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"14f49c04-388f-4eeb-be54-cbf3713606db","Type":"ContainerStarted","Data":"231b7739e843cae1aa504dfabcdb94cf556cd3fc4ee799cde98951ab165c4bf7"} Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.491211 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.491194532 podStartE2EDuration="2.491194532s" podCreationTimestamp="2026-02-17 16:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:42.453648545 +0000 UTC m=+1305.970007618" watchObservedRunningTime="2026-02-17 16:15:42.491194532 +0000 UTC m=+1306.007553605" Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.552756 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.606250 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.630741 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:15:43 crc kubenswrapper[4808]: I0217 16:15:43.456927 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5ce308e0-2ba0-41ae-8760-e749c8d04130","Type":"ContainerStarted","Data":"842197a478f5f020ab22c11d7648ef4ee7379a947af34e2df48b686f2efc6dd2"} Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.215501 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258756 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258842 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258883 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.259079 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.262961 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs" (OuterVolumeSpecName: "logs") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.265477 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t" (OuterVolumeSpecName: "kube-api-access-krq8t") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "kube-api-access-krq8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.269727 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.318790 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362547 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362591 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362602 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362612 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.379885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data" (OuterVolumeSpecName: "config-data") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.463955 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480183 4808 generic.go:334] "Generic (PLEG): container finished" podID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" exitCode=0 Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480235 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerDied","Data":"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13"} Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480253 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480275 4808 scope.go:117] "RemoveContainer" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480263 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerDied","Data":"5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d"} Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.518269 4808 scope.go:117] "RemoveContainer" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.523196 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.533272 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.553824 4808 scope.go:117] "RemoveContainer" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" Feb 17 16:15:45 crc kubenswrapper[4808]: E0217 16:15:45.554350 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13\": container with ID starting with 6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13 not found: ID does not exist" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.554388 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13"} err="failed to get container status \"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13\": rpc error: code = NotFound desc = could not find container \"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13\": container with ID starting with 6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13 not found: ID does not exist" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.554416 4808 scope.go:117] "RemoveContainer" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" Feb 17 16:15:45 crc kubenswrapper[4808]: E0217 16:15:45.555024 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1\": container with ID starting with 8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1 not found: ID does not exist" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.555076 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1"} err="failed to get container status \"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1\": rpc error: code = NotFound desc = could not find container \"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1\": container with ID starting with 8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1 not found: ID does not exist" Feb 17 
16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.728963 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.787892 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tmj75"] Feb 17 16:15:46 crc kubenswrapper[4808]: E0217 16:15:46.799686 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.799730 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log" Feb 17 16:15:46 crc kubenswrapper[4808]: E0217 16:15:46.799776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.799785 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.800156 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.800172 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.801085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmj75" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.811288 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.812257 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c8b8554dd-86wnt" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" containerID="cri-o://f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa" gracePeriod=30 Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.812522 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c8b8554dd-86wnt" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" containerID="cri-o://6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb" gracePeriod=30 Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.870830 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tmj75"] Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.890900 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75" Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.890966 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75" 
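[Editor's note] The exit codes pair up with the graceful kills: barbican-api-log was SIGTERMed and exited 143 (128+15) about a second after its gracePeriod=30 kill, while barbican-api drained and exited 0 roughly four seconds in; the neutron-5c8b8554dd-86wnt kills above begin the same pattern. A sketch correlating each "Killing container with a grace period" record with its later "container finished" record (a hypothetical helper of mine, same one-record-per-line stdin assumption):

    // kill_trace.go - matches graceful-kill records to the PLEG
    // "container finished" records by container ID and reports how
    // each killed container actually exited.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        kill = regexp.MustCompile(`Killing container with a grace period".*containerName="([^"]+)" containerID="cri-o://([0-9a-f]{64})" gracePeriod=(\d+)`)
        done = regexp.MustCompile(`container finished".*containerID="([0-9a-f]{64})" exitCode=(-?\d+)`)
    )

    func main() {
        names := map[string]string{} // container ID -> name + grace period
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            line := sc.Text()
            if m := kill.FindStringSubmatch(line); m != nil {
                names[m[2]] = fmt.Sprintf("%s (grace %ss)", m[1], m[3])
            }
            if m := done.FindStringSubmatch(line); m != nil {
                if who, ok := names[m[1]]; ok {
                    fmt.Printf("%s exited %s\n", who, m[2])
                }
            }
        }
    }

An exit code of 137 (128+9) in this report would mean the container ignored SIGTERM and burned the whole grace period before SIGKILL; none of the containers in this window did.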
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.906016 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.907311 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.937992 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.971987 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.973428 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.977181 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993535 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993665 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993725 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993778 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993877 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993926 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.994717 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.995019 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.016761 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095224 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095277 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095375 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.096076 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.096103 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-drbdx"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.098063 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.099015 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.107917 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.109193 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.111970 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.114506 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.115042 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.129426 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.135157 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.144267 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.168607 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" path="/var/lib/kubelet/pods/bd86efad-8ad2-4e38-b731-5f892d34a582/volumes"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.197753 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.198004 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.198082 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.198109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.226099 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300108 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300493 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300593 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300677 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.301424 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.301681 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.302302 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.304495 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.326105 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.326893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.329366 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"] Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.402781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.403546 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.408724 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.506267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.506561 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.509081 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.522925 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.543711 4808 generic.go:334] "Generic (PLEG): container finished" podID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerID="6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb" exitCode=0 Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.543842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerDied","Data":"6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb"} Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.566124 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.592093 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.634675 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tmj75"] Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.639176 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.767749 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"] Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.999445 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"] Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.104762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.168142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"] Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.235790 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"] Feb 17 16:15:48 crc kubenswrapper[4808]: W0217 16:15:48.242803 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6cd1abe_7b23_494f_b22f_b355f5937f82.slice/crio-6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737 WatchSource:0}: Error finding container 6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737: Status 404 returned error can't find the container with id 6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737 Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.280285 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"] Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.558730 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerStarted","Data":"51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.558821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerStarted","Data":"8a75933f3031c6b1f8cf8ff6b1411acfe98718f81345fbaa18024575af0bf6ba"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.561342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerStarted","Data":"4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.561387 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerStarted","Data":"6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.563194 4808 generic.go:334] "Generic (PLEG): container finished" podID="adb98158-8a64-4a24-9d8a-5c7308881c79" containerID="24b6cca39f7f0539540e703e695312278dead1c9fbed89b92d1978c2b31592d9" exitCode=0 Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.563238 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" event={"ID":"adb98158-8a64-4a24-9d8a-5c7308881c79","Type":"ContainerDied","Data":"24b6cca39f7f0539540e703e695312278dead1c9fbed89b92d1978c2b31592d9"} Feb 17 16:15:48 crc 
kubenswrapper[4808]: I0217 16:15:48.563258 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" event={"ID":"adb98158-8a64-4a24-9d8a-5c7308881c79","Type":"ContainerStarted","Data":"0dc09ac306fc7e2b364ea4b44d5d09a138003a1e81f7a44ecd2f51ed4b1d1b89"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.565314 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerStarted","Data":"75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.565340 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerStarted","Data":"3e57bebfb95b0d9d4f461957a8bd1f2f06012fd271323ebe71abc58fa6b4937e"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.568368 4808 generic.go:334] "Generic (PLEG): container finished" podID="84bc7003-1a29-41b6-af75-956706dd0efe" containerID="8a03cfda6ba1482551fb43a88bb0d456e3e357369b1e584649fa69312e5fe7ab" exitCode=0 Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.568449 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bmg4x" event={"ID":"84bc7003-1a29-41b6-af75-956706dd0efe","Type":"ContainerDied","Data":"8a03cfda6ba1482551fb43a88bb0d456e3e357369b1e584649fa69312e5fe7ab"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.568482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bmg4x" event={"ID":"84bc7003-1a29-41b6-af75-956706dd0efe","Type":"ContainerStarted","Data":"cf5220fed618b3508a0f2ed78390fae1a7cb088c433552f6ee16c31271e9f9f4"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.572645 4808 generic.go:334] "Generic (PLEG): container finished" podID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerID="202121dae9bdf398a0c42e540c49f3bde76321b020f7cab3e7250c352d974480" exitCode=0 Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.572696 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmj75" event={"ID":"785bc852-9af8-4d44-9c07-a7b501efb72c","Type":"ContainerDied","Data":"202121dae9bdf398a0c42e540c49f3bde76321b020f7cab3e7250c352d974480"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.572720 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmj75" event={"ID":"785bc852-9af8-4d44-9c07-a7b501efb72c","Type":"ContainerStarted","Data":"39a847653b65f7a910542af7c8bf6279189cd0c6dc3f5a9660574c5fd3b57fa7"} Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.582128 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-drbdx" podStartSLOduration=1.5821075580000001 podStartE2EDuration="1.582107558s" podCreationTimestamp="2026-02-17 16:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:48.580478265 +0000 UTC m=+1312.096837338" watchObservedRunningTime="2026-02-17 16:15:48.582107558 +0000 UTC m=+1312.098466631" Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.614786 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" podStartSLOduration=1.614769713 
podStartE2EDuration="1.614769713s" podCreationTimestamp="2026-02-17 16:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:48.610218409 +0000 UTC m=+1312.126577482" watchObservedRunningTime="2026-02-17 16:15:48.614769713 +0000 UTC m=+1312.131128786" Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.647966 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" podStartSLOduration=1.6479470809999999 podStartE2EDuration="1.647947081s" podCreationTimestamp="2026-02-17 16:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:48.645812663 +0000 UTC m=+1312.162171746" watchObservedRunningTime="2026-02-17 16:15:48.647947081 +0000 UTC m=+1312.164306154" Feb 17 16:15:48 crc kubenswrapper[4808]: E0217 16:15:48.936703 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6543f3f_c70d_4258_b1f3_b74458b60153.slice/crio-conmon-51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6543f3f_c70d_4258_b1f3_b74458b60153.slice/crio-51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad0fdf2_2880_4568_87b0_6319f864c348.slice/crio-conmon-75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.583543 4808 generic.go:334] "Generic (PLEG): container finished" podID="bad0fdf2-2880-4568-87b0-6319f864c348" containerID="75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28" exitCode=0 Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.584145 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerDied","Data":"75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28"} Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.586944 4808 generic.go:334] "Generic (PLEG): container finished" podID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerID="51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336" exitCode=0 Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.587066 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerDied","Data":"51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336"} Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.588653 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerID="4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca" exitCode=0 Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.588794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" 
event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerDied","Data":"4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca"} Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.490807 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491314 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" containerID="cri-o://7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8" gracePeriod=30 Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491423 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" containerID="cri-o://1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68" gracePeriod=30 Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491418 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" containerID="cri-o://a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61" gracePeriod=30 Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491473 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" containerID="cri-o://f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a" gracePeriod=30 Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.500758 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.189:3000/\": EOF" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.649161 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-dcfbdc547-54spv"] Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.651077 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.654464 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.654609 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.655115 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.669796 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dcfbdc547-54spv"] Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.709993 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-run-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710048 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zvx\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-kube-api-access-g5zvx\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710085 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-etc-swift\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-config-data\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710167 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-internal-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710188 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-combined-ca-bundle\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710207 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-log-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " 
pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710250 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-public-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813713 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-combined-ca-bundle\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813760 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-log-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-public-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813868 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-run-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813896 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zvx\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-kube-api-access-g5zvx\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813929 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-etc-swift\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-config-data\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.814008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-internal-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc 
kubenswrapper[4808]: I0217 16:15:50.815284 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-run-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.820530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-log-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.822181 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-internal-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.823878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-combined-ca-bundle\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.825082 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-config-data\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.834961 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-public-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.836281 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-etc-swift\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.840290 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zvx\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-kube-api-access-g5zvx\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.972177 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.591777 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.592112 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.611887 4808 generic.go:334] "Generic (PLEG): container finished" podID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerID="f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.611957 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerDied","Data":"f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615944 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615970 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61" exitCode=2 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615979 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615988 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615971 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.616020 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.616033 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.616042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.656435 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerDied","Data":"8a75933f3031c6b1f8cf8ff6b1411acfe98718f81345fbaa18024575af0bf6ba"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.657192 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a75933f3031c6b1f8cf8ff6b1411acfe98718f81345fbaa18024575af0bf6ba" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.660517 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerDied","Data":"6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.660711 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.665351 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" event={"ID":"adb98158-8a64-4a24-9d8a-5c7308881c79","Type":"ContainerDied","Data":"0dc09ac306fc7e2b364ea4b44d5d09a138003a1e81f7a44ecd2f51ed4b1d1b89"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.665392 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dc09ac306fc7e2b364ea4b44d5d09a138003a1e81f7a44ecd2f51ed4b1d1b89" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.668131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerDied","Data":"3e57bebfb95b0d9d4f461957a8bd1f2f06012fd271323ebe71abc58fa6b4937e"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.668164 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e57bebfb95b0d9d4f461957a8bd1f2f06012fd271323ebe71abc58fa6b4937e" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.895414 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.910986 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.918108 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.923230 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.931190 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.933866 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"c6cd1abe-7b23-494f-b22f-b355f5937f82\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.933966 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"c6cd1abe-7b23-494f-b22f-b355f5937f82\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.936096 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6cd1abe-7b23-494f-b22f-b355f5937f82" (UID: "c6cd1abe-7b23-494f-b22f-b355f5937f82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.938793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m" (OuterVolumeSpecName: "kube-api-access-njp8m") pod "c6cd1abe-7b23-494f-b22f-b355f5937f82" (UID: "c6cd1abe-7b23-494f-b22f-b355f5937f82"). InnerVolumeSpecName "kube-api-access-njp8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.941643 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmj75" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.035727 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"b6543f3f-c70d-4258-b1f3-b74458b60153\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036081 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"bad0fdf2-2880-4568-87b0-6319f864c348\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6543f3f-c70d-4258-b1f3-b74458b60153" (UID: "b6543f3f-c70d-4258-b1f3-b74458b60153"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036155 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"785bc852-9af8-4d44-9c07-a7b501efb72c\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036297 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"adb98158-8a64-4a24-9d8a-5c7308881c79\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036490 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"84bc7003-1a29-41b6-af75-956706dd0efe\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"785bc852-9af8-4d44-9c07-a7b501efb72c\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036541 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"adb98158-8a64-4a24-9d8a-5c7308881c79\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036666 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"bad0fdf2-2880-4568-87b0-6319f864c348\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036723 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"b6543f3f-c70d-4258-b1f3-b74458b60153\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036766 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"84bc7003-1a29-41b6-af75-956706dd0efe\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.037648 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.037668 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: 
I0217 16:15:56.037677 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.038084 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adb98158-8a64-4a24-9d8a-5c7308881c79" (UID: "adb98158-8a64-4a24-9d8a-5c7308881c79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.038167 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84bc7003-1a29-41b6-af75-956706dd0efe" (UID: "84bc7003-1a29-41b6-af75-956706dd0efe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.039175 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "785bc852-9af8-4d44-9c07-a7b501efb72c" (UID: "785bc852-9af8-4d44-9c07-a7b501efb72c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.039222 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bad0fdf2-2880-4568-87b0-6319f864c348" (UID: "bad0fdf2-2880-4568-87b0-6319f864c348"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.042169 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl" (OuterVolumeSpecName: "kube-api-access-qhqjl") pod "adb98158-8a64-4a24-9d8a-5c7308881c79" (UID: "adb98158-8a64-4a24-9d8a-5c7308881c79"). InnerVolumeSpecName "kube-api-access-qhqjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.043794 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r" (OuterVolumeSpecName: "kube-api-access-w296r") pod "bad0fdf2-2880-4568-87b0-6319f864c348" (UID: "bad0fdf2-2880-4568-87b0-6319f864c348"). InnerVolumeSpecName "kube-api-access-w296r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.044919 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd" (OuterVolumeSpecName: "kube-api-access-27rbd") pod "b6543f3f-c70d-4258-b1f3-b74458b60153" (UID: "b6543f3f-c70d-4258-b1f3-b74458b60153"). InnerVolumeSpecName "kube-api-access-27rbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.045024 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx" (OuterVolumeSpecName: "kube-api-access-g8jmx") pod "785bc852-9af8-4d44-9c07-a7b501efb72c" (UID: "785bc852-9af8-4d44-9c07-a7b501efb72c"). InnerVolumeSpecName "kube-api-access-g8jmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.046830 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm" (OuterVolumeSpecName: "kube-api-access-pmqtm") pod "84bc7003-1a29-41b6-af75-956706dd0efe" (UID: "84bc7003-1a29-41b6-af75-956706dd0efe"). InnerVolumeSpecName "kube-api-access-pmqtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.140370 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.140704 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.140838 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141129 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141268 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141435 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141470 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141494 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141518 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.219675 4808 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244069 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244103 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244159 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244180 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244223 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244294 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.248296 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.251767 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.252040 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts" (OuterVolumeSpecName: "scripts") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.254838 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4" (OuterVolumeSpecName: "kube-api-access-rcrg4") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "kube-api-access-rcrg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.305755 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.306022 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348282 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348368 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348389 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348500 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349139 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349158 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349170 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349185 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349196 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.366092 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z" (OuterVolumeSpecName: "kube-api-access-wnm4z") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "kube-api-access-wnm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.366099 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.402598 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.421755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data" (OuterVolumeSpecName: "config-data") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.434489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config" (OuterVolumeSpecName: "config") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.438771 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.450262 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.451986 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452012 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452020 4808 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452031 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452041 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452054 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452062 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.510529 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dcfbdc547-54spv"] Feb 17 16:15:56 crc kubenswrapper[4808]: W0217 16:15:56.515494 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45097e1f_e6c7_40c1_8338_3f1ac506c3fe.slice/crio-b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5 WatchSource:0}: Error finding container b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5: Status 404 returned error can't find the container with id b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5 Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 
16:15:56.688303 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmj75" event={"ID":"785bc852-9af8-4d44-9c07-a7b501efb72c","Type":"ContainerDied","Data":"39a847653b65f7a910542af7c8bf6279189cd0c6dc3f5a9660574c5fd3b57fa7"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.688560 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a847653b65f7a910542af7c8bf6279189cd0c6dc3f5a9660574c5fd3b57fa7" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.688713 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmj75" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.692863 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerDied","Data":"37ecb8a325939b5e585da0c83aac7cd196aa16f8c7e46e0941abecb0dea07a08"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.692912 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.692921 4808 scope.go:117] "RemoveContainer" containerID="6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.696626 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.696673 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.699360 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5ce308e0-2ba0-41ae-8760-e749c8d04130","Type":"ContainerStarted","Data":"0439c1b605810f673e651f06c93177fa20814d1c29ae34ee315d15a1a316426a"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.700336 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dcfbdc547-54spv" event={"ID":"45097e1f-e6c7-40c1-8338-3f1ac506c3fe","Type":"ContainerStarted","Data":"b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701520 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701580 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701597 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701613 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bmg4x" event={"ID":"84bc7003-1a29-41b6-af75-956706dd0efe","Type":"ContainerDied","Data":"cf5220fed618b3508a0f2ed78390fae1a7cb088c433552f6ee16c31271e9f9f4"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701630 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5220fed618b3508a0f2ed78390fae1a7cb088c433552f6ee16c31271e9f9f4" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701632 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701662 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.720245 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.523250086 podStartE2EDuration="15.720211597s" podCreationTimestamp="2026-02-17 16:15:41 +0000 UTC" firstStartedPulling="2026-02-17 16:15:42.55023311 +0000 UTC m=+1306.066592173" lastFinishedPulling="2026-02-17 16:15:55.747194611 +0000 UTC m=+1319.263553684" observedRunningTime="2026-02-17 16:15:56.715782518 +0000 UTC m=+1320.232141591" watchObservedRunningTime="2026-02-17 16:15:56.720211597 +0000 UTC m=+1320.236570670" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.746311 4808 scope.go:117] "RemoveContainer" containerID="f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.804544 4808 scope.go:117] "RemoveContainer" containerID="f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.808419 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.834626 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.847421 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.853141 4808 scope.go:117] "RemoveContainer" containerID="a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.865705 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874407 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874827 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874846 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874861 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" Feb 17 16:15:56 crc 
kubenswrapper[4808]: I0217 16:15:56.874868 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874886 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874892 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874906 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874912 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874920 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874928 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874942 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874949 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874962 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874967 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874980 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874986 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874997 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875003 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.875012 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875019 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.875027 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875034 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.875043 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875049 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875235 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875246 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875258 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875270 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875283 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875290 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875298 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875308 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875318 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875329 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875335 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875344 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.877365 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.879607 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.879776 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.883750 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.893490 4808 scope.go:117] "RemoveContainer" containerID="1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.923202 4808 scope.go:117] "RemoveContainer" containerID="7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963196 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963246 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963309 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963547 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963724 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963782 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 
16:15:57.065308 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065799 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065821 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.066310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.066510 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.071724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.072341 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.073235 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.078799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.083845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.165566 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade95199-c613-4920-aa24-6cedde28dda6" path="/var/lib/kubelet/pods/ade95199-c613-4920-aa24-6cedde28dda6/volumes" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.166519 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" path="/var/lib/kubelet/pods/b4b8e73f-b7b0-4580-8e0f-44eef84624e4/volumes" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.194310 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.698853 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:57 crc kubenswrapper[4808]: W0217 16:15:57.700449 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26053b6_532d_42e0_84a8_9ad29e1168d3.slice/crio-a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8 WatchSource:0}: Error finding container a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8: Status 404 returned error can't find the container with id a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8 Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.703769 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.738470 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8"} Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.744720 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dcfbdc547-54spv" event={"ID":"45097e1f-e6c7-40c1-8338-3f1ac506c3fe","Type":"ContainerStarted","Data":"7792b065ae0edf8db1757f3f3b9f6fbd9960bdac27171c26a8590ad7277582da"} Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.744768 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dcfbdc547-54spv" event={"ID":"45097e1f-e6c7-40c1-8338-3f1ac506c3fe","Type":"ContainerStarted","Data":"dec1f5b8a7b4d282b15f0cb2e044c9ba55004eb023fff21ce9494f27f7d32dd6"} Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.744826 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.747895 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.781622 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-dcfbdc547-54spv" podStartSLOduration=7.781597437 podStartE2EDuration="7.781597437s" podCreationTimestamp="2026-02-17 16:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:57.774418843 +0000 UTC m=+1321.290777966" watchObservedRunningTime="2026-02-17 16:15:57.781597437 +0000 UTC m=+1321.297956520" Feb 17 16:15:59 crc kubenswrapper[4808]: I0217 16:15:59.239443 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:59 crc kubenswrapper[4808]: I0217 16:15:59.776874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd"} Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.170203 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.172002 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.181043 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.181303 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcmz6" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.181554 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.191784 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.285968 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.286083 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.286133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.286204 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.388549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.389003 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.389203 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: 
\"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.389357 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.395168 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.395622 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.395666 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.406075 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.494129 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.832833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888"} Feb 17 16:16:03 crc kubenswrapper[4808]: W0217 16:16:03.026276 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda276997e_b8ab_4b5a_ac5f_c21a8114d673.slice/crio-268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5 WatchSource:0}: Error finding container 268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5: Status 404 returned error can't find the container with id 268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5 Feb 17 16:16:03 crc kubenswrapper[4808]: I0217 16:16:03.026296 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:16:03 crc kubenswrapper[4808]: I0217 16:16:03.859274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerStarted","Data":"268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5"} Feb 17 16:16:04 crc kubenswrapper[4808]: I0217 16:16:04.870011 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d"} Feb 17 16:16:05 crc kubenswrapper[4808]: I0217 16:16:05.980818 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:16:05 crc kubenswrapper[4808]: I0217 16:16:05.984032 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.888955 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58"} Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889339 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" containerID="cri-o://e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889270 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" containerID="cri-o://26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889343 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" containerID="cri-o://aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889374 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" containerID="cri-o://0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.919158 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.692136971 podStartE2EDuration="10.919143108s" podCreationTimestamp="2026-02-17 16:15:56 +0000 UTC" firstStartedPulling="2026-02-17 16:15:57.703397649 +0000 UTC m=+1321.219756722" lastFinishedPulling="2026-02-17 16:16:05.930403786 +0000 UTC m=+1329.446762859" observedRunningTime="2026-02-17 16:16:06.917396411 +0000 UTC m=+1330.433755484" watchObservedRunningTime="2026-02-17 16:16:06.919143108 +0000 UTC m=+1330.435502181" Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.209843 4808 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod37da8fa5-9dda-4e98-9a63-a4c0036e0017"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod37da8fa5-9dda-4e98-9a63-a4c0036e0017] : Timed out while waiting for systemd to remove kubepods-besteffort-pod37da8fa5_9dda_4e98_9a63_a4c0036e0017.slice" Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900398 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58" exitCode=0 Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900428 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d" exitCode=2 Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900459 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888" exitCode=0 Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900427 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58"} Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d"} Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900500 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888"} Feb 17 16:16:09 crc kubenswrapper[4808]: I0217 16:16:09.933891 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd" exitCode=0 Feb 17 16:16:09 crc kubenswrapper[4808]: I0217 16:16:09.933991 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd"} Feb 17 16:16:13 crc kubenswrapper[4808]: I0217 16:16:13.188198 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.175082 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.305344 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.305402 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.305785 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306350 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306372 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306407 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306496 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.307494 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.307519 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.310831 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr" (OuterVolumeSpecName: "kube-api-access-wc4cr") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "kube-api-access-wc4cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.316798 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts" (OuterVolumeSpecName: "scripts") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.340528 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.403328 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410044 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410164 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410256 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410323 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.413949 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data" (OuterVolumeSpecName: "config-data") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.511961 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.986021 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerStarted","Data":"03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f"} Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.991634 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8"} Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.991750 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.991853 4808 scope.go:117] "RemoveContainer" containerID="e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.005961 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" podStartSLOduration=2.251788694 podStartE2EDuration="13.005945489s" podCreationTimestamp="2026-02-17 16:16:02 +0000 UTC" firstStartedPulling="2026-02-17 16:16:03.029026694 +0000 UTC m=+1326.545385767" lastFinishedPulling="2026-02-17 16:16:13.783183489 +0000 UTC m=+1337.299542562" observedRunningTime="2026-02-17 16:16:15.004223882 +0000 UTC m=+1338.520582955" watchObservedRunningTime="2026-02-17 16:16:15.005945489 +0000 UTC m=+1338.522304562" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.019112 4808 scope.go:117] "RemoveContainer" containerID="aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.042854 4808 scope.go:117] "RemoveContainer" containerID="0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.048645 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.059670 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070274 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070794 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070817 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070851 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070862 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070876 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070884 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070916 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070925 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071146 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071176 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071193 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071205 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.073336 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.076494 4808 scope.go:117] "RemoveContainer" containerID="26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.078816 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.079210 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.088710 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.158087 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" path="/var/lib/kubelet/pods/b26053b6-532d-42e0-84a8-9ad29e1168d3/volumes" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226329 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226524 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226637 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226888 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.227104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328250 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328345 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328993 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.329033 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.329067 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.329128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.329736 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 
16:16:15.331672 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.332479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.332877 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.334525 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.348087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.356142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.390075 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.875407 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:16 crc kubenswrapper[4808]: I0217 16:16:16.004471 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"b85ba2e2aadf05c8a92885adbf2c7f51e6f51c7f11cdad1a0c73632146a66e50"} Feb 17 16:16:17 crc kubenswrapper[4808]: I0217 16:16:17.016564 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5"} Feb 17 16:16:17 crc kubenswrapper[4808]: I0217 16:16:17.246450 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.030294 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.030717 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd" containerID="cri-o://ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" gracePeriod=30 Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.030665 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log" containerID="cri-o://ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" gracePeriod=30 Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.041508 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2"} Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.041552 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8"} Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.980850 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.981353 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd" containerID="cri-o://177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e" gracePeriod=30 Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.982461 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log" containerID="cri-o://93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674" gracePeriod=30 Feb 17 16:16:19 crc kubenswrapper[4808]: I0217 16:16:19.052376 4808 generic.go:334] "Generic (PLEG): container finished" podID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" 
containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" exitCode=143 Feb 17 16:16:19 crc kubenswrapper[4808]: I0217 16:16:19.052417 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerDied","Data":"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f"} Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.064373 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674" exitCode=143 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.064443 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerDied","Data":"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"} Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83"} Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067785 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" containerID="cri-o://301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067918 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-notification-agent" containerID="cri-o://271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067840 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" containerID="cri-o://d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067820 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067876 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" containerID="cri-o://112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.099395 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.688753285 podStartE2EDuration="5.099371446s" podCreationTimestamp="2026-02-17 16:16:15 +0000 UTC" firstStartedPulling="2026-02-17 16:16:15.889485063 +0000 UTC m=+1339.405844136" lastFinishedPulling="2026-02-17 16:16:19.300103224 +0000 UTC m=+1342.816462297" observedRunningTime="2026-02-17 16:16:20.089297403 +0000 UTC m=+1343.605656476" watchObservedRunningTime="2026-02-17 16:16:20.099371446 +0000 UTC m=+1343.615730529" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.076978 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" exitCode=0 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078021 4808 generic.go:334] "Generic (PLEG): container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" exitCode=2 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078145 4808 generic.go:334] "Generic (PLEG): container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" exitCode=0 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.077049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83"} Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078334 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2"} Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078426 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8"} Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.592273 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.592663 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.592709 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.593332 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.593400 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49" gracePeriod=600 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.774551 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859138 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859248 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859300 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859391 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859449 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859489 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859508 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859667 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859912 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.860496 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.860615 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs" (OuterVolumeSpecName: "logs") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.866214 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72" (OuterVolumeSpecName: "kube-api-access-v2l72") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "kube-api-access-v2l72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.875231 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts" (OuterVolumeSpecName: "scripts") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.896597 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (OuterVolumeSpecName: "glance") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.921452 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.921595 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962594 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962919 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962931 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962945 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962956 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962991 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.976858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data" (OuterVolumeSpecName: "config-data") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.995274 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.995430 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522") on node "crc" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.064588 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.064617 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091178 4808 generic.go:334] "Generic (PLEG): container finished" podID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" exitCode=0 Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091225 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerDied","Data":"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091273 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerDied","Data":"5259b7f9e5eb8d16dd9b6467f0a2e9d1eee838ac2578fd7225262f0187ce85fa"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091292 4808 scope.go:117] "RemoveContainer" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091254 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.099195 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49" exitCode=0 Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.099296 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.099488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.114598 4808 scope.go:117] "RemoveContainer" containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.139894 4808 scope.go:117] "RemoveContainer" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.140301 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5\": container with ID starting with ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5 not found: ID does not exist" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.140338 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5"} err="failed to get container status \"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5\": rpc error: code = NotFound desc = could not find container \"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5\": container with ID starting with ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5 not found: ID does not exist" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.140365 4808 scope.go:117] "RemoveContainer" containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.142300 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f\": container with ID starting with ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f not found: ID does not exist" containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.142469 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f"} err="failed to get container status \"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f\": rpc error: code = NotFound desc = could not find container \"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f\": container with ID starting 
with ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f not found: ID does not exist" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.142555 4808 scope.go:117] "RemoveContainer" containerID="12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.165265 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.212640 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.229647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.230422 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.230545 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd" Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.230683 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.230767 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.231086 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.231164 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.232631 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.235203 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.239442 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.244970 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370200 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370258 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370713 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6q9x\" (UniqueName: \"kubernetes.io/projected/d5dbe689-5e11-4832-84c8-d603c08a23e2-kube-api-access-q6q9x\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370856 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370904 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370979 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.371035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.371067 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472340 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472731 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6q9x\" (UniqueName: \"kubernetes.io/projected/d5dbe689-5e11-4832-84c8-d603c08a23e2-kube-api-access-q6q9x\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472780 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472803 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472856 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472925 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.474106 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.474309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.480897 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.481729 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.481765 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/793125420e976eb43638bc1f8c10c1dbf19200ea40f241dea1aa3deff96042e8/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.486273 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.492385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.497733 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.511922 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6q9x\" (UniqueName: \"kubernetes.io/projected/d5dbe689-5e11-4832-84c8-d603c08a23e2-kube-api-access-q6q9x\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.573868 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.854712 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.959590 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112078 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e" exitCode=0 Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerDied","Data":"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"} Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112437 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerDied","Data":"674bc197545e528a3fae6a8ee441743eba630fd0f6cf0ca9277898370f13b963"} Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112149 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112458 4808 scope.go:117] "RemoveContainer" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114182 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114304 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114328 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114356 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114382 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod 
\"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114654 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114714 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.118529 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs" (OuterVolumeSpecName: "logs") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.120114 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.134834 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts" (OuterVolumeSpecName: "scripts") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.134918 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm" (OuterVolumeSpecName: "kube-api-access-wngfm") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "kube-api-access-wngfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.136892 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (OuterVolumeSpecName: "glance") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.140099 4808 scope.go:117] "RemoveContainer" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.174092 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" path="/var/lib/kubelet/pods/311ff62c-be53-44b9-a2f7-933e94d8dfb1/volumes" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.175192 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.184753 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.193103 4808 scope.go:117] "RemoveContainer" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e" Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.200472 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e\": container with ID starting with 177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e not found: ID does not exist" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.200515 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"} err="failed to get container status \"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e\": rpc error: code = NotFound desc = could not find container \"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e\": container with ID starting with 177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e not found: ID does not exist" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.200562 4808 scope.go:117] "RemoveContainer" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674" Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.201257 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674\": container with ID starting with 93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674 not found: ID does not exist" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.201294 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"} err="failed to get container status 
\"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674\": rpc error: code = NotFound desc = could not find container \"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674\": container with ID starting with 93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674 not found: ID does not exist" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.215041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data" (OuterVolumeSpecName: "config-data") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.216613 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.216635 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.216645 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.219914 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.219958 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.219997 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" " Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.220012 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.220024 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.257484 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.258038 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c") on node "crc" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.322243 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.445210 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.457174 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.469930 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.470506 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470524 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd" Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.470589 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470597 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470934 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470964 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.473483 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.475950 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.476124 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.488281 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.499891 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:16:23 crc kubenswrapper[4808]: W0217 16:16:23.509137 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5dbe689_5e11_4832_84c8_d603c08a23e2.slice/crio-3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550 WatchSource:0}: Error finding container 3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550: Status 404 returned error can't find the container with id 3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550 Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.630894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.630950 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-logs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631107 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631191 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjjq\" (UniqueName: \"kubernetes.io/projected/b59528d2-0bad-4c66-9971-222dcaf72184-kube-api-access-dkjjq\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631230 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732837 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkjjq\" (UniqueName: \"kubernetes.io/projected/b59528d2-0bad-4c66-9971-222dcaf72184-kube-api-access-dkjjq\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732867 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732910 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732932 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.733005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.733029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-logs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.733944 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-logs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.734029 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.739061 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.739148 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/babb0a58e49abb7abbb526a723d7265132519584485959e000cf4b8b02c96a84/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.742364 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.742669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.744970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.749458 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.753843 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkjjq\" (UniqueName: \"kubernetes.io/projected/b59528d2-0bad-4c66-9971-222dcaf72184-kube-api-access-dkjjq\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.808243 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.088439 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.169424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5dbe689-5e11-4832-84c8-d603c08a23e2","Type":"ContainerStarted","Data":"3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550"} Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.675699 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:24 crc kubenswrapper[4808]: W0217 16:16:24.686372 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59528d2_0bad_4c66_9971_222dcaf72184.slice/crio-597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b WatchSource:0}: Error finding container 597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b: Status 404 returned error can't find the container with id 597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.889908 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967103 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967172 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967214 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967324 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967393 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.970973 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.971241 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.974172 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts" (OuterVolumeSpecName: "scripts") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.983461 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx" (OuterVolumeSpecName: "kube-api-access-k8blx") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "kube-api-access-k8blx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.005361 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.070106 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.070863 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.071012 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.071096 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.071182 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.080744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.128222 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data" (OuterVolumeSpecName: "config-data") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.167690 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" path="/var/lib/kubelet/pods/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0/volumes" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.173510 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.173552 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.206740 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5dbe689-5e11-4832-84c8-d603c08a23e2","Type":"ContainerStarted","Data":"3fc0e3e9839ba6ba04d80ec65d4fefff92d9970c5ba78a504133c669ee060018"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.206812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5dbe689-5e11-4832-84c8-d603c08a23e2","Type":"ContainerStarted","Data":"68fa00e5c58a7a7daea19a1d47626e9d66f57afa40f30874855c7674e068d81f"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216150 4808 generic.go:334] "Generic (PLEG): container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" exitCode=0 Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216224 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216256 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"b85ba2e2aadf05c8a92885adbf2c7f51e6f51c7f11cdad1a0c73632146a66e50"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216277 4808 scope.go:117] "RemoveContainer" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216429 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.227883 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b59528d2-0bad-4c66-9971-222dcaf72184","Type":"ContainerStarted","Data":"597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.252620 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.252402237 podStartE2EDuration="3.252402237s" podCreationTimestamp="2026-02-17 16:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:25.228722716 +0000 UTC m=+1348.745081789" watchObservedRunningTime="2026-02-17 16:16:25.252402237 +0000 UTC m=+1348.768761310" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.262286 4808 scope.go:117] "RemoveContainer" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.280474 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.296680 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.310769 4808 scope.go:117] "RemoveContainer" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.314672 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315181 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315201 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315221 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-notification-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315229 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-notification-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315257 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315265 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315303 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315312 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315543 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" 
containerName="ceilometer-notification-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315561 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315602 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315619 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.317812 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.333191 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.342403 4808 scope.go:117] "RemoveContainer" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.343480 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.343649 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.373144 4808 scope.go:117] "RemoveContainer" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.375040 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83\": container with ID starting with d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83 not found: ID does not exist" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.375096 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83"} err="failed to get container status \"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83\": rpc error: code = NotFound desc = could not find container \"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83\": container with ID starting with d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.375133 4808 scope.go:117] "RemoveContainer" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.378010 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2\": container with ID starting with 112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2 not found: ID does not exist" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378086 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2"} err="failed to get container status \"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2\": rpc error: code = NotFound desc = could not find container \"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2\": container with ID starting with 112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378116 4808 scope.go:117] "RemoveContainer" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.378882 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8\": container with ID starting with 271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8 not found: ID does not exist" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378921 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8"} err="failed to get container status \"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8\": rpc error: code = NotFound desc = could not find container \"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8\": container with ID starting with 271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378942 4808 scope.go:117] "RemoveContainer" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.379639 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5\": container with ID starting with 301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5 not found: ID does not exist" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.379689 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5"} err="failed to get container status \"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5\": rpc error: code = NotFound desc = could not find container \"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5\": container with ID starting with 301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480600 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480702 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmph\" (UniqueName: 
\"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480765 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480803 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480853 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.481064 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583430 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583472 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583604 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583677 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583763 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.584146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.584267 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.587220 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.588520 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.589257 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.589763 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.601164 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.698285 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.209379 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:26 crc kubenswrapper[4808]: W0217 16:16:26.209922 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8456642_c501_433c_9644_afbe5c7a43e6.slice/crio-3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0 WatchSource:0}: Error finding container 3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0: Status 404 returned error can't find the container with id 3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0 Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.241298 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b59528d2-0bad-4c66-9971-222dcaf72184","Type":"ContainerStarted","Data":"d49d69e9af5ef0514c6116e0015e9c73bb90b2d46a58c33141c3338212974e96"} Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.241336 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b59528d2-0bad-4c66-9971-222dcaf72184","Type":"ContainerStarted","Data":"4a0cc7af3e1540c076ba9d914c3743fc8e613a5a7782fbfcc159b718262a9a5c"} Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.242679 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0"} Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.245879 4808 generic.go:334] "Generic (PLEG): container finished" podID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerID="03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f" exitCode=0 Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.245962 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerDied","Data":"03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f"} Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.281153 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.281137613 podStartE2EDuration="3.281137613s" podCreationTimestamp="2026-02-17 16:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:26.277257157 +0000 UTC m=+1349.793616230" watchObservedRunningTime="2026-02-17 16:16:26.281137613 +0000 UTC m=+1349.797496686" Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.122853 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.160912 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" path="/var/lib/kubelet/pods/c97f3908-a38c-4f62-ace9-1071eb7f8d55/volumes" Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.256755 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7"} Feb 17 16:16:27 crc 
kubenswrapper[4808]: I0217 16:16:27.802208 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925163 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925346 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925384 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925441 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.933760 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts" (OuterVolumeSpecName: "scripts") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.935792 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj" (OuterVolumeSpecName: "kube-api-access-2fwrj") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "kube-api-access-2fwrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.958777 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data" (OuterVolumeSpecName: "config-data") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.960262 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028878 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028911 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028921 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028929 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.269390 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerDied","Data":"268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5"} Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.269441 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.269509 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.275652 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1"} Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.275698 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc"} Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.422994 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:28 crc kubenswrapper[4808]: E0217 16:16:28.423868 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerName="nova-cell0-conductor-db-sync" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.423890 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerName="nova-cell0-conductor-db-sync" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.424136 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerName="nova-cell0-conductor-db-sync" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.425030 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.433628 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.434290 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcmz6" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.434309 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.539221 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.539386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.539416 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.641189 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.641232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.641351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.645168 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.645386 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.673660 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.742680 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:29 crc kubenswrapper[4808]: I0217 16:16:29.216849 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:29 crc kubenswrapper[4808]: I0217 16:16:29.288799 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerStarted","Data":"7caffc2a919e783df44efecca3e4d55e23b17b2dd4860e6c42d49a0f3c69fe6a"} Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.300068 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerStarted","Data":"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183"} Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.300815 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb"} Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303293 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent" containerID="cri-o://3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7" gracePeriod=30 Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303479 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd" containerID="cri-o://bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb" gracePeriod=30 Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303529 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core" containerID="cri-o://e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1" gracePeriod=30 Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303603 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent" containerID="cri-o://090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc" gracePeriod=30 Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.357693 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.172771616 podStartE2EDuration="5.357670305s" podCreationTimestamp="2026-02-17 16:16:25 +0000 UTC" firstStartedPulling="2026-02-17 16:16:26.212122374 +0000 UTC m=+1349.728481447" lastFinishedPulling="2026-02-17 16:16:29.397021043 +0000 UTC m=+1352.913380136" observedRunningTime="2026-02-17 16:16:30.350729106 +0000 UTC m=+1353.867088179" watchObservedRunningTime="2026-02-17 16:16:30.357670305 +0000 UTC m=+1353.874029388" Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.365095 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.365075715 podStartE2EDuration="2.365075715s" podCreationTimestamp="2026-02-17 16:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:30.324145227 +0000 UTC m=+1353.840504300" watchObservedRunningTime="2026-02-17 16:16:30.365075715 +0000 UTC m=+1353.881434798" Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.314828 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb" exitCode=0 Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315166 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1" exitCode=2 Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315176 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc" exitCode=0 Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.314899 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb"} Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315287 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1"} Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315301 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc"} Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.855410 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.855955 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.882942 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.907141 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:16:33 crc kubenswrapper[4808]: I0217 
16:16:33.341439 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:16:33 crc kubenswrapper[4808]: I0217 16:16:33.341512 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.089330 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.089401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.126623 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.144019 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.368433 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7" exitCode=0 Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.368616 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7"} Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.370296 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.371022 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.791041 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.892298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.892885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893308 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893351 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893396 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893514 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893593 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.894379 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.899663 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph" (OuterVolumeSpecName: "kube-api-access-6pmph") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "kube-api-access-6pmph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.903863 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts" (OuterVolumeSpecName: "scripts") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.926744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.974855 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996526 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996588 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996603 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996616 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996628 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996639 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.014220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data" (OuterVolumeSpecName: "config-data") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.099439 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.170552 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.174748 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.390682 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0"} Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.391071 4808 scope.go:117] "RemoveContainer" containerID="bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.391136 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.429716 4808 scope.go:117] "RemoveContainer" containerID="e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.434830 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.465669 4808 scope.go:117] "RemoveContainer" containerID="090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.465796 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.498525 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499152 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499171 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent" Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499184 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499191 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd" Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499201 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499207 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core" Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499228 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent" Feb 17 16:16:35 crc 
kubenswrapper[4808]: I0217 16:16:35.499234 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499440 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499453 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499465 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.503481 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.503728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.508217 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.508482 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.532182 4808 scope.go:117] "RemoveContainer" containerID="3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.615978 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616096 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616310 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718064 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718124 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718158 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718261 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718657 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.719560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.722748 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.723203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.723252 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.732929 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.733428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.853127 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:36 crc kubenswrapper[4808]: I0217 16:16:36.235896 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:36 crc kubenswrapper[4808]: I0217 16:16:36.301141 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:36 crc kubenswrapper[4808]: I0217 16:16:36.418347 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:37 crc kubenswrapper[4808]: I0217 16:16:37.184549 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" path="/var/lib/kubelet/pods/e8456642-c501-433c-9644-afbe5c7a43e6/volumes" Feb 17 16:16:37 crc kubenswrapper[4808]: I0217 16:16:37.445692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3"} Feb 17 16:16:37 crc kubenswrapper[4808]: I0217 16:16:37.446038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"91d1642df2334e4f429a191525235bf1d0f2f6da6b1932c826f1850f30b2d130"} Feb 17 16:16:38 crc kubenswrapper[4808]: I0217 16:16:38.181984 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:38 crc kubenswrapper[4808]: I0217 16:16:38.182469 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" containerID="cri-o://7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" gracePeriod=30 Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.187488 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.188890 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.190106 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.190138 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:38 crc kubenswrapper[4808]: 
I0217 16:16:38.459410 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b"} Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.745003 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.746822 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.747719 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.747754 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:39 crc kubenswrapper[4808]: I0217 16:16:39.471008 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60"} Feb 17 16:16:39 crc kubenswrapper[4808]: I0217 16:16:39.727886 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.489798 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00"} Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490211 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" containerID="cri-o://96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490162 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" containerID="cri-o://9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490193 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" 
containerID="cri-o://3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490223 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490102 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" containerID="cri-o://450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.521756 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.225791879 podStartE2EDuration="5.521728496s" podCreationTimestamp="2026-02-17 16:16:35 +0000 UTC" firstStartedPulling="2026-02-17 16:16:36.437055021 +0000 UTC m=+1359.953414094" lastFinishedPulling="2026-02-17 16:16:39.732991648 +0000 UTC m=+1363.249350711" observedRunningTime="2026-02-17 16:16:40.512130802 +0000 UTC m=+1364.028489875" watchObservedRunningTime="2026-02-17 16:16:40.521728496 +0000 UTC m=+1364.038087569" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.028212 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.230393 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"793e01c5-a9c7-4545-8244-34a6bae837dc\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.230750 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"793e01c5-a9c7-4545-8244-34a6bae837dc\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.230984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod \"793e01c5-a9c7-4545-8244-34a6bae837dc\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.236208 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr" (OuterVolumeSpecName: "kube-api-access-s7rtr") pod "793e01c5-a9c7-4545-8244-34a6bae837dc" (UID: "793e01c5-a9c7-4545-8244-34a6bae837dc"). InnerVolumeSpecName "kube-api-access-s7rtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.263788 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data" (OuterVolumeSpecName: "config-data") pod "793e01c5-a9c7-4545-8244-34a6bae837dc" (UID: "793e01c5-a9c7-4545-8244-34a6bae837dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.266083 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "793e01c5-a9c7-4545-8244-34a6bae837dc" (UID: "793e01c5-a9c7-4545-8244-34a6bae837dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.333521 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.333561 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.333588 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502818 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00" exitCode=0 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502860 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60" exitCode=2 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502871 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b" exitCode=0 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502890 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502933 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505010 4808 generic.go:334] "Generic (PLEG): container finished" podID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" exitCode=0 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505056 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505088 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerDied","Data":"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505142 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerDied","Data":"7caffc2a919e783df44efecca3e4d55e23b17b2dd4860e6c42d49a0f3c69fe6a"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505172 4808 scope.go:117] "RemoveContainer" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.561173 4808 scope.go:117] "RemoveContainer" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" Feb 17 16:16:41 crc kubenswrapper[4808]: E0217 16:16:41.562781 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183\": container with ID starting with 7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183 not found: ID does not exist" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.562837 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183"} err="failed to get container status \"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183\": rpc error: code = NotFound desc = could not find container \"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183\": container with ID starting with 7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183 not found: ID does not exist" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.564807 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.588512 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.615651 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: E0217 16:16:41.616176 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.616196 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.616553 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.617442 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.619967 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcmz6" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.620971 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.627037 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.743020 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.743139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfl4m\" (UniqueName: \"kubernetes.io/projected/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-kube-api-access-lfl4m\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.743182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.844712 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.844822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfl4m\" (UniqueName: \"kubernetes.io/projected/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-kube-api-access-lfl4m\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.844865 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.857733 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.858246 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.883399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfl4m\" (UniqueName: \"kubernetes.io/projected/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-kube-api-access-lfl4m\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.943081 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:42 crc kubenswrapper[4808]: I0217 16:16:42.452211 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:42 crc kubenswrapper[4808]: W0217 16:16:42.465284 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd596411_c54c_4a8a_9b6a_420b6ab3c9ff.slice/crio-98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616 WatchSource:0}: Error finding container 98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616: Status 404 returned error can't find the container with id 98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616 Feb 17 16:16:42 crc kubenswrapper[4808]: I0217 16:16:42.528894 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff","Type":"ContainerStarted","Data":"98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616"} Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.157874 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" path="/var/lib/kubelet/pods/793e01c5-a9c7-4545-8244-34a6bae837dc/volumes" Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.538657 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff","Type":"ContainerStarted","Data":"6b76ca9582a3f7fa8574efca5f8781ff8022549fc27b18113ef1087c180daf14"} Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.538820 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.576725 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.576708034 podStartE2EDuration="2.576708034s" podCreationTimestamp="2026-02-17 16:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:43.568093046 +0000 UTC m=+1367.084452119" watchObservedRunningTime="2026-02-17 16:16:43.576708034 +0000 UTC m=+1367.093067107" Feb 17 16:16:44 crc kubenswrapper[4808]: I0217 16:16:44.549527 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3" exitCode=0 Feb 17 16:16:44 crc kubenswrapper[4808]: I0217 16:16:44.549614 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3"} Feb 17 16:16:44 crc kubenswrapper[4808]: I0217 
16:16:44.872450 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.011764 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012106 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012151 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012208 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012236 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012260 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012335 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012892 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "log-httpd". 
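This is the same teardown sequence ceilometer-0 already went through once above, now for its replacement pod UID 8d522679: the pod keeps its name across delete/recreate cycles but gets a fresh UID, so every volume is unmounted and detached again under the new UID's directory. Once all volumes report detached, the kubelet removes the per-pod volumes directory, which surfaces as the "Cleaned up orphaned pod volumes dir" entries. A small helper rebuilding the paths involved, using the kubelet layout in which the plugin name's slash becomes a tilde:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Paths behind the "Cleaned up orphaned pod volumes dir" entries. The
// kubelet keeps each mounted volume under the pod UID, with the plugin
// name's "/" replaced by "~" (e.g. kubernetes.io~secret).
func podVolumesDir(podUID string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
}

func volumeDir(podUID, plugin, volume string) string {
	return filepath.Join(podVolumesDir(podUID), plugin, volume)
}

func main() {
	fmt.Println(volumeDir(
		"8d522679-0f73-4d58-b7f7-ddb835a4dbc6",
		"kubernetes.io~secret",
		"config-data",
	))
}
```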
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.013175 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.013192 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.018845 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts" (OuterVolumeSpecName: "scripts") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.022975 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j" (OuterVolumeSpecName: "kube-api-access-n5g5j") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "kube-api-access-n5g5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.048781 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.101718 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114928 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114965 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114979 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114996 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.117780 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data" (OuterVolumeSpecName: "config-data") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.217015 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.561938 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"91d1642df2334e4f429a191525235bf1d0f2f6da6b1932c826f1850f30b2d130"} Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.561983 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.562001 4808 scope.go:117] "RemoveContainer" containerID="3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.599606 4808 scope.go:117] "RemoveContainer" containerID="9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.607535 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.635094 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.638327 4808 scope.go:117] "RemoveContainer" containerID="96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645035 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645454 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645463 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645469 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645482 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645489 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645501 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645507 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645825 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645863 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645881 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645896 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.648177 4808 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.658977 4808 scope.go:117] "RemoveContainer" containerID="450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.668003 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.668336 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.669431 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728808 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728858 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.729018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.729142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.729267 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.831522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj867\" (UniqueName: 
\"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.831727 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.831877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832613 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832666 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832699 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832750 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.833173 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.833397 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.837708 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.837918 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod 
\"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.845390 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.852565 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.854078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.991163 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:46 crc kubenswrapper[4808]: I0217 16:16:46.475136 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:46 crc kubenswrapper[4808]: I0217 16:16:46.572004 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"48499d1ccd18294cde816d0461ae46337409d9b91f256c480873ba6063c87133"} Feb 17 16:16:47 crc kubenswrapper[4808]: I0217 16:16:47.160997 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" path="/var/lib/kubelet/pods/8d522679-0f73-4d58-b7f7-ddb835a4dbc6/volumes" Feb 17 16:16:47 crc kubenswrapper[4808]: I0217 16:16:47.593039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5"} Feb 17 16:16:48 crc kubenswrapper[4808]: I0217 16:16:48.606967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90"} Feb 17 16:16:48 crc kubenswrapper[4808]: I0217 16:16:48.607323 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf"} Feb 17 16:16:50 crc kubenswrapper[4808]: I0217 16:16:50.643905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe"} Feb 17 16:16:50 crc kubenswrapper[4808]: I0217 16:16:50.644500 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:16:50 crc kubenswrapper[4808]: I0217 16:16:50.688292 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.323931145 podStartE2EDuration="5.688272212s" podCreationTimestamp="2026-02-17 16:16:45 +0000 UTC" firstStartedPulling="2026-02-17 16:16:46.469873257 +0000 UTC m=+1369.986232340" lastFinishedPulling="2026-02-17 16:16:49.834214334 +0000 UTC m=+1373.350573407" observedRunningTime="2026-02-17 16:16:50.666600288 +0000 UTC m=+1374.182959381" watchObservedRunningTime="2026-02-17 16:16:50.688272212 +0000 UTC m=+1374.204631285" Feb 17 16:16:51 crc kubenswrapper[4808]: I0217 16:16:51.973484 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.540946 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.546153 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.548761 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.549684 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.567657 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.687730 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690328 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690398 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690634 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690681 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.692793 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.701025 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.767053 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.768696 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.771837 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.786858 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.788291 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792078 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792226 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792296 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792342 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.797409 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.799249 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.801140 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.818645 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.838305 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.854203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.869216 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.881998 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.897729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898006 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898267 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898839 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898983 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.899128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"nova-api-0\" (UID: 
\"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.899238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.905562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.907218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.913933 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.944031 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.972664 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.974322 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.981947 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.005336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.005649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.005922 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.006043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.006188 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.006286 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.014111 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.018739 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.021034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.022127 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.039947 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.050874 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.053530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.066909 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.069075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.094257 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111197 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111342 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111368 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111416 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111523 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.120938 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.133963 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.215825 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.215881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.215933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.216007 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.220441 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.243143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.274130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.292426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319495 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 
16:16:53.319531 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319586 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319637 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319688 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.386097 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422395 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422455 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422488 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422552 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422622 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.425800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.428221 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.428273 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc 
kubenswrapper[4808]: I0217 16:16:53.428683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.429136 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.436979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.499118 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.627894 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.684825 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerStarted","Data":"8246e1d9e27ac063f20e993837fefe05ee7faed0616a81f38ae63adc17f5680c"} Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.776805 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.778210 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.780986 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.781175 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.795436 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.932443 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.932802 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.932956 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.933189 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.034746 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.035788 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.035863 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.035931 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.040986 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.041118 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.044686 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.067146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.079024 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.089754 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.109376 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.127738 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:16:54 crc kubenswrapper[4808]: W0217 16:16:54.133823 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17dd9003_af7c_4ead_bd8a_69dd599672e1.slice/crio-6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94 WatchSource:0}: Error finding container 6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94: Status 404 returned error can't find the container with id 6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94 Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.149810 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.197334 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.707922 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerStarted","Data":"ee5e98cadb90446acabe123662e49b6a4cd2eca56be18b81e05b30047bcff9c1"} Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.715067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerStarted","Data":"c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc"} Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.718679 4808 generic.go:334] "Generic (PLEG): container finished" podID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerID="3ef21441db2673d8cb4a73235d72eeb9fb765f3ab14514345fdd78ed72a42293" exitCode=0 Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.718729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerDied","Data":"3ef21441db2673d8cb4a73235d72eeb9fb765f3ab14514345fdd78ed72a42293"} Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.718748 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerStarted","Data":"6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94"} Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.719538 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.721720 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerStarted","Data":"b5824b16acbd91bc8be7043e9329004ce8288b6bdf03b1752a9c0085eb731c99"} Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.722869 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerStarted","Data":"21c9110345aef4dc69cbeac414de965fd822d356a427b405912ce038ca889eb8"} Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.724842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerStarted","Data":"70be81454915c76edbe1bd9f9a80641c32a52e8409743ccd53fcf3858d18b2d6"} Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.735117 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lhrsb" podStartSLOduration=2.735099446 podStartE2EDuration="2.735099446s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:54.733704918 +0000 UTC m=+1378.250083742" watchObservedRunningTime="2026-02-17 16:16:54.735099446 +0000 UTC m=+1378.251458519" Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.743725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerStarted","Data":"531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096"} Feb 
17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.744079 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerStarted","Data":"2d3829a8cd87e1e7493f796b94998c113c1da2acebe2d18b959cae6d8ec1e0ba"} Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.746557 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerStarted","Data":"60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae"} Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.787812 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" podStartSLOduration=2.787791221 podStartE2EDuration="2.787791221s" podCreationTimestamp="2026-02-17 16:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:55.786550598 +0000 UTC m=+1379.302909671" watchObservedRunningTime="2026-02-17 16:16:55.787791221 +0000 UTC m=+1379.304150294" Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.795307 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-46chh" podStartSLOduration=2.795285729 podStartE2EDuration="2.795285729s" podCreationTimestamp="2026-02-17 16:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:55.760207381 +0000 UTC m=+1379.276566464" watchObservedRunningTime="2026-02-17 16:16:55.795285729 +0000 UTC m=+1379.311644802" Feb 17 16:16:56 crc kubenswrapper[4808]: I0217 16:16:56.385219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:16:56 crc kubenswrapper[4808]: I0217 16:16:56.435290 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:16:56 crc kubenswrapper[4808]: I0217 16:16:56.754378 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.765863 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerStarted","Data":"6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959"} Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.766483 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerStarted","Data":"b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef"} Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.766134 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" containerID="cri-o://6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959" gracePeriod=30 Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.766068 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" 
containerID="cri-o://b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef" gracePeriod=30 Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.770080 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerStarted","Data":"93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa"} Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.770227 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa" gracePeriod=30 Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.773496 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerStarted","Data":"e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f"} Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.782123 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerStarted","Data":"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"} Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.782164 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerStarted","Data":"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"} Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.787527 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.753312787 podStartE2EDuration="5.787510685s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.141297897 +0000 UTC m=+1377.657656960" lastFinishedPulling="2026-02-17 16:16:57.175495785 +0000 UTC m=+1380.691854858" observedRunningTime="2026-02-17 16:16:57.784435883 +0000 UTC m=+1381.300794956" watchObservedRunningTime="2026-02-17 16:16:57.787510685 +0000 UTC m=+1381.303869758" Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.806192 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.72881628 podStartE2EDuration="5.806169779s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.107736279 +0000 UTC m=+1377.624095352" lastFinishedPulling="2026-02-17 16:16:57.185089778 +0000 UTC m=+1380.701448851" observedRunningTime="2026-02-17 16:16:57.805591514 +0000 UTC m=+1381.321950587" watchObservedRunningTime="2026-02-17 16:16:57.806169779 +0000 UTC m=+1381.322528852" Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.822761 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.696271637 podStartE2EDuration="5.822736107s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.049122737 +0000 UTC m=+1377.565481810" lastFinishedPulling="2026-02-17 16:16:57.175587207 +0000 UTC m=+1380.691946280" observedRunningTime="2026-02-17 16:16:57.822044359 +0000 UTC m=+1381.338403422" watchObservedRunningTime="2026-02-17 16:16:57.822736107 +0000 UTC m=+1381.339095180" Feb 17 16:16:57 crc 
kubenswrapper[4808]: I0217 16:16:57.851160 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.795982097 podStartE2EDuration="5.851141839s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.121967345 +0000 UTC m=+1377.638326418" lastFinishedPulling="2026-02-17 16:16:57.177127087 +0000 UTC m=+1380.693486160" observedRunningTime="2026-02-17 16:16:57.846038054 +0000 UTC m=+1381.362397127" watchObservedRunningTime="2026-02-17 16:16:57.851141839 +0000 UTC m=+1381.367500912" Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.070382 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.095563 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.386696 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.386753 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.800507 4808 generic.go:334] "Generic (PLEG): container finished" podID="018b3b96-1953-4437-83ab-99bc970bcd36" containerID="b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef" exitCode=143 Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.800750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerDied","Data":"b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef"} Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.842390 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d64831b-aec0-42cd-96ec-831ec911d921" containerID="531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096" exitCode=0 Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.842641 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerDied","Data":"531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096"} Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.848073 4808 generic.go:334] "Generic (PLEG): container finished" podID="3864d41e-915e-4b73-908e-c575d38863e9" containerID="c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc" exitCode=0 Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.848150 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerDied","Data":"c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc"} Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.015052 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.015184 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.095600 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:17:03 crc kubenswrapper[4808]: E0217 16:17:03.123960 4808 kubelet_node_status.go:756] 
"Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.133149 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.500864 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.512002 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.518241 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.600354 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"] Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.600704 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-786qn" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" containerID="cri-o://893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53" gracePeriod=10 Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669305 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"8d64831b-aec0-42cd-96ec-831ec911d921\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669327 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669458 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669513 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"8d64831b-aec0-42cd-96ec-831ec911d921\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669533 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.133149 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.500864 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-ktqh6"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.512002 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.518241 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.600354 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"]
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.600704 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-786qn" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" containerID="cri-o://893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53" gracePeriod=10
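[annotation] "Killing container with a grace period" is the SIGTERM/SIGKILL escalation: the runtime delivers SIGTERM and only after gracePeriod seconds (10 here, 30 for the nova-api and nova-scheduler containers below) falls back to SIGKILL. A process that dies on SIGTERM is recorded by the runtime as exit code 128+15 = 143, which matches the exitCode=143 lines for nova-metadata above and nova-api-log below. An illustrative sketch of the pattern (not CRI-O source):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace, then escalates to SIGKILL.
// Go reports a signal death as "signal: terminated"; container runtimes
// record it as 128+signal, hence 143 for SIGTERM (137 for SIGKILL).
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 10*time.Second))
}
```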
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772077 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772103 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772115 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772124 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.794712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3864d41e-915e-4b73-908e-c575d38863e9" (UID: "3864d41e-915e-4b73-908e-c575d38863e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.822770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data" (OuterVolumeSpecName: "config-data") pod "8d64831b-aec0-42cd-96ec-831ec911d921" (UID: "8d64831b-aec0-42cd-96ec-831ec911d921"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.835077 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d64831b-aec0-42cd-96ec-831ec911d921" (UID: "8d64831b-aec0-42cd-96ec-831ec911d921"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.843401 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data" (OuterVolumeSpecName: "config-data") pod "3864d41e-915e-4b73-908e-c575d38863e9" (UID: "3864d41e-915e-4b73-908e-c575d38863e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873678 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873722 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873737 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873747 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.900264 4808 generic.go:334] "Generic (PLEG): container finished" podID="ef386302-14e1-4b00-b816-e85da8d23114" containerID="893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53" exitCode=0 Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.900342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerDied","Data":"893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53"} Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.903622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerDied","Data":"2d3829a8cd87e1e7493f796b94998c113c1da2acebe2d18b959cae6d8ec1e0ba"} Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.903649 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d3829a8cd87e1e7493f796b94998c113c1da2acebe2d18b959cae6d8ec1e0ba" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.903711 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.909162 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.912674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerDied","Data":"8246e1d9e27ac063f20e993837fefe05ee7faed0616a81f38ae63adc17f5680c"} Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.912724 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8246e1d9e27ac063f20e993837fefe05ee7faed0616a81f38ae63adc17f5680c" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.963134 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.964791 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67bdc55879-786qn" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.972566 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:17:03 crc kubenswrapper[4808]: E0217 16:17:03.973557 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3864d41e-915e-4b73-908e-c575d38863e9" containerName="nova-manage" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973590 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3864d41e-915e-4b73-908e-c575d38863e9" containerName="nova-manage" Feb 17 16:17:03 crc kubenswrapper[4808]: E0217 16:17:03.973625 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" containerName="nova-cell1-conductor-db-sync" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973631 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" containerName="nova-cell1-conductor-db-sync" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973832 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3864d41e-915e-4b73-908e-c575d38863e9" containerName="nova-manage" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973856 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" containerName="nova-cell1-conductor-db-sync" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.975960 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.978512 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.992383 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.077422 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb56m\" (UniqueName: \"kubernetes.io/projected/1c30e340-2218-46f6-97d6-aaf96a54d84d-kube-api-access-kb56m\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.077528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.077736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.084056 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.084315 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log" containerID="cri-o://5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96" gracePeriod=30 Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.084405 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api" containerID="cri-o://56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc" gracePeriod=30 Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.089847 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.089962 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.179679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:17:04 crc 
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.014678 4808 generic.go:334] "Generic (PLEG): container finished" podID="4b35f2cf-f95a-4467-a797-79239af955c4" containerID="e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f" exitCode=0
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.014857 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerDied","Data":"e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f"}
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.439843 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.490228 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"4b35f2cf-f95a-4467-a797-79239af955c4\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") "
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.490362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"4b35f2cf-f95a-4467-a797-79239af955c4\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") "
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.490481 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"4b35f2cf-f95a-4467-a797-79239af955c4\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") "
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.496348 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd" (OuterVolumeSpecName: "kube-api-access-9nvbd") pod "4b35f2cf-f95a-4467-a797-79239af955c4" (UID: "4b35f2cf-f95a-4467-a797-79239af955c4"). InnerVolumeSpecName "kube-api-access-9nvbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.523990 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b35f2cf-f95a-4467-a797-79239af955c4" (UID: "4b35f2cf-f95a-4467-a797-79239af955c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.527049 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data" (OuterVolumeSpecName: "config-data") pod "4b35f2cf-f95a-4467-a797-79239af955c4" (UID: "4b35f2cf-f95a-4467-a797-79239af955c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.593514 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.593548 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.593559 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.001353 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.027843 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerDied","Data":"70be81454915c76edbe1bd9f9a80641c32a52e8409743ccd53fcf3858d18b2d6"}
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.027901 4808 scope.go:117] "RemoveContainer" containerID="e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.027921 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036514 4808 generic.go:334] "Generic (PLEG): container finished" podID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc" exitCode=0
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036556 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerDied","Data":"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"}
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036632 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036640 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerDied","Data":"ee5e98cadb90446acabe123662e49b6a4cd2eca56be18b81e05b30047bcff9c1"}
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.057114 4808 scope.go:117] "RemoveContainer" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.080507 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.085300 4808 scope.go:117] "RemoveContainer" containerID="5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.099211 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100038 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") "
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100244 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") "
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100357 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") "
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100562 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") "
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.101040 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs" (OuterVolumeSpecName: "logs") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.101837 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.113737 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114230 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="init"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114250 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="init"
Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114275 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114283 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns"
Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114299 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114307 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler"
Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114328 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114335 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api"
Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114350 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114357 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114614 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114645 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114655 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114683 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.115206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8" (OuterVolumeSpecName: "kube-api-access-pnzg8") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "kube-api-access-pnzg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.120132 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.123625 4808 scope.go:117] "RemoveContainer" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"
Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.125160 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc\": container with ID starting with 56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc not found: ID does not exist" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.125205 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"} err="failed to get container status \"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc\": rpc error: code = NotFound desc = could not find container \"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc\": container with ID starting with 56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc not found: ID does not exist"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.125232 4808 scope.go:117] "RemoveContainer" containerID="5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"
Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.126035 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96\": container with ID starting with 5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96 not found: ID does not exist" containerID="5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.126067 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"} err="failed to get container status \"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96\": rpc error: code = NotFound desc = could not find container \"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96\": container with ID starting with 5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96 not found: ID does not exist"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.139251 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.141429 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.144827 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data" (OuterVolumeSpecName: "config-data") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.174842 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.204631 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.204997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205092 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205255 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205277 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205290 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.306911 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.306978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.307015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.310656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.310729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.328233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.372994 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.386948 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.406048 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.407766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.421170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.442010 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.174842 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.204631 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.204997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205092 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205255 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205277 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205290 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.306911 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.306978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.307015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " 
pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.310656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.310729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.328233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.372994 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.386948 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.406048 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.407766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.421170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.442010 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512130 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512477 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 
16:17:10.562064 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614398 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614477 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614497 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.615755 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.618691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.620201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.634278 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.742393 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.032955 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:17:11 crc kubenswrapper[4808]: W0217 16:17:11.039495 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc906d5a8_4187_4f58_a352_fa7faea85309.slice/crio-3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8 WatchSource:0}: Error finding container 3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8: Status 404 returned error can't find the container with id 3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8 Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.163135 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" path="/var/lib/kubelet/pods/4b35f2cf-f95a-4467-a797-79239af955c4/volumes" Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.164388 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" path="/var/lib/kubelet/pods/d49b36d0-eee7-4656-a6d8-cdf627d181b4/volumes" Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.230868 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.071221 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerStarted","Data":"8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8"} Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.071490 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerStarted","Data":"8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90"} Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.071499 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerStarted","Data":"98396bda825cd064a21268c85ea75ac821bba4f4fc3e844ab94ef3298d308124"} Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.073791 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerStarted","Data":"d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372"} Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.073809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerStarted","Data":"3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8"} Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.117049 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.117025449 podStartE2EDuration="2.117025449s" podCreationTimestamp="2026-02-17 16:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:12.095566161 +0000 UTC m=+1395.611925234" watchObservedRunningTime="2026-02-17 16:17:12.117025449 +0000 UTC m=+1395.633384522" Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.120334 4808 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.120321256 podStartE2EDuration="2.120321256s" podCreationTimestamp="2026-02-17 16:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:12.111483252 +0000 UTC m=+1395.627842385" watchObservedRunningTime="2026-02-17 16:17:12.120321256 +0000 UTC m=+1395.636680319" Feb 17 16:17:14 crc kubenswrapper[4808]: I0217 16:17:14.332979 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 16:17:15 crc kubenswrapper[4808]: I0217 16:17:15.562612 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:17:16 crc kubenswrapper[4808]: I0217 16:17:16.002639 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:17:19 crc kubenswrapper[4808]: I0217 16:17:19.849865 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:17:19 crc kubenswrapper[4808]: I0217 16:17:19.851562 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics" containerID="cri-o://b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3" gracePeriod=30 Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.164838 4808 generic.go:334] "Generic (PLEG): container finished" podID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerID="b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3" exitCode=2 Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.164960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerDied","Data":"b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3"} Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.452412 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.563069 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.593169 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.619349 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") " Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.629483 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8" (OuterVolumeSpecName: "kube-api-access-jrnn8") pod "0a2bf674-1881-41e9-9c0f-93e8f14ac222" (UID: "0a2bf674-1881-41e9-9c0f-93e8f14ac222"). InnerVolumeSpecName "kube-api-access-jrnn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.722428 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.744514 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.744549 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.176790 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.177432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerDied","Data":"fe6c047a841d65d85a9f0e609ea1b96b4c6bc76859984c45d4fc65974fb15811"} Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.177470 4808 scope.go:117] "RemoveContainer" containerID="b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.227101 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.241040 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.276380 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:17:21 crc kubenswrapper[4808]: E0217 16:17:21.276785 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.276802 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.277174 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.278028 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.280935 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.281142 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.323168 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.351177 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437245 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437479 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437555 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ffdb\" (UniqueName: \"kubernetes.io/projected/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-api-access-4ffdb\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.538968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.539040 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.539089 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ffdb\" (UniqueName: \"kubernetes.io/projected/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-api-access-4ffdb\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " 
pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.539159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.544627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.545564 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.552161 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.580560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ffdb\" (UniqueName: \"kubernetes.io/projected/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-api-access-4ffdb\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.616886 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.786103 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.829953 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.196201 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253535 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253951 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent" containerID="cri-o://8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf" gracePeriod=30 Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253998 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core" containerID="cri-o://14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90" gracePeriod=30 Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253953 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd" containerID="cri-o://d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe" gracePeriod=30 Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253897 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" containerID="cri-o://b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5" gracePeriod=30 Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.188332 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" path="/var/lib/kubelet/pods/0a2bf674-1881-41e9-9c0f-93e8f14ac222/volumes" Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.202662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ea994e-22f1-4dbf-8b79-8810148fad94","Type":"ContainerStarted","Data":"e7d21c872fa4c721be582bc5512fce9ea8639756444f3305678af814ac6cbd4d"} Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.202723 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ea994e-22f1-4dbf-8b79-8810148fad94","Type":"ContainerStarted","Data":"1aaa23450d14170763e407fef48c651573ad4a50cf0158720864da2982c04494"} Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.202782 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:17:23 crc 
kubenswrapper[4808]: I0217 16:17:23.206013 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe" exitCode=0 Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206035 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90" exitCode=2 Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206044 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5" exitCode=0 Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206063 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe"} Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90"} Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206095 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5"} Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.222369 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7956094390000001 podStartE2EDuration="2.222352426s" podCreationTimestamp="2026-02-17 16:17:21 +0000 UTC" firstStartedPulling="2026-02-17 16:17:22.204533312 +0000 UTC m=+1405.720892375" lastFinishedPulling="2026-02-17 16:17:22.631276289 +0000 UTC m=+1406.147635362" observedRunningTime="2026-02-17 16:17:23.220917857 +0000 UTC m=+1406.737276940" watchObservedRunningTime="2026-02-17 16:17:23.222352426 +0000 UTC m=+1406.738711499" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.227550 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf" exitCode=0 Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.227606 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf"} Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.228061 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"48499d1ccd18294cde816d0461ae46337409d9b91f256c480873ba6063c87133"} Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.228075 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48499d1ccd18294cde816d0461ae46337409d9b91f256c480873ba6063c87133" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.254973 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423761 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423814 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423879 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423969 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424002 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424120 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424167 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424820 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424903 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.429672 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts" (OuterVolumeSpecName: "scripts") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.432458 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867" (OuterVolumeSpecName: "kube-api-access-gj867") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "kube-api-access-gj867". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.464949 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.524433 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526234 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526267 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526282 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526293 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526305 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526316 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.560425 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data" (OuterVolumeSpecName: "config-data") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.628165 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.237640 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.276990 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.288167 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299425 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299822 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299840 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299854 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299861 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd" Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299879 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core" Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299902 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299908 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300076 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300092 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300107 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300118 4808 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.302025 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.304422 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.304876 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.308917 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.318825 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.446601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.446668 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.446923 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447023 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447203 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447306 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc 
kubenswrapper[4808]: I0217 16:17:26.447480 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549081 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549209 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549235 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549294 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549406 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549434 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549660 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.550429 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 
16:17:26.550462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.555471 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.555616 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.556541 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.556691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.560801 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.574220 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.620069 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:27 crc kubenswrapper[4808]: I0217 16:17:27.169722 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" path="/var/lib/kubelet/pods/9e219b86-d82e-47f5-b071-c44ce0695362/volumes" Feb 17 16:17:27 crc kubenswrapper[4808]: I0217 16:17:27.171609 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:27 crc kubenswrapper[4808]: I0217 16:17:27.249215 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"ab32feefa5626c6c7de2470473cdca164dd77fd77015ec801b8e2ecef92b4ac6"} Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.279545 4808 generic.go:334] "Generic (PLEG): container finished" podID="67800510-1957-448c-88a1-0d2898a6524b" containerID="93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa" exitCode=137 Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.279920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerDied","Data":"93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa"} Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.282379 4808 generic.go:334] "Generic (PLEG): container finished" podID="018b3b96-1953-4437-83ab-99bc970bcd36" containerID="6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959" exitCode=137 Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.282406 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerDied","Data":"6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959"} Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.554843 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.561116 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.698784 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod \"67800510-1957-448c-88a1-0d2898a6524b\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.698915 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"67800510-1957-448c-88a1-0d2898a6524b\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.698956 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699034 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699084 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699151 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699245 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod \"67800510-1957-448c-88a1-0d2898a6524b\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.700058 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs" (OuterVolumeSpecName: "logs") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.710839 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77" (OuterVolumeSpecName: "kube-api-access-tml77") pod "67800510-1957-448c-88a1-0d2898a6524b" (UID: "67800510-1957-448c-88a1-0d2898a6524b"). InnerVolumeSpecName "kube-api-access-tml77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.711887 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj" (OuterVolumeSpecName: "kube-api-access-mb4wj") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). InnerVolumeSpecName "kube-api-access-mb4wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.733467 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data" (OuterVolumeSpecName: "config-data") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.735457 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.740903 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data" (OuterVolumeSpecName: "config-data") pod "67800510-1957-448c-88a1-0d2898a6524b" (UID: "67800510-1957-448c-88a1-0d2898a6524b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.740472 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67800510-1957-448c-88a1-0d2898a6524b" (UID: "67800510-1957-448c-88a1-0d2898a6524b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801670 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801709 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801721 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801730 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801741 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801750 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801758 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.293671 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4"} Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.295781 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerDied","Data":"21c9110345aef4dc69cbeac414de965fd822d356a427b405912ce038ca889eb8"} Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.295815 4808 scope.go:117] "RemoveContainer" containerID="6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.295941 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.300489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerDied","Data":"b5824b16acbd91bc8be7043e9329004ce8288b6bdf03b1752a9c0085eb731c99"} Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.300811 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.337768 4808 scope.go:117] "RemoveContainer" containerID="b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.354022 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.376933 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.408113 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.418324 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: E0217 16:17:29.418976 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.419085 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" Feb 17 16:17:29 crc kubenswrapper[4808]: E0217 16:17:29.419164 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.419217 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" Feb 17 16:17:29 crc kubenswrapper[4808]: E0217 16:17:29.419279 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.419328 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.420200 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.420298 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.420365 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.421523 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.426830 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.432340 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.435170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.449664 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.462548 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.464001 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.465957 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.466168 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.466481 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.476523 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.496850 4808 scope.go:117] "RemoveContainer" containerID="93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515617 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515685 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515866 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.617887 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp2l\" (UniqueName: \"kubernetes.io/projected/e1acfe51-1173-4ce1-a645-d757d30e3312-kube-api-access-dhp2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.617979 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618004 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618223 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618781 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618882 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618922 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618988 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.619263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.619391 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.619516 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.622392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.628219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.628434 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.636659 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721496 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721662 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhp2l\" (UniqueName: \"kubernetes.io/projected/e1acfe51-1173-4ce1-a645-d757d30e3312-kube-api-access-dhp2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721719 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.724872 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.725109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.725628 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.727149 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.743122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhp2l\" (UniqueName: \"kubernetes.io/projected/e1acfe51-1173-4ce1-a645-d757d30e3312-kube-api-access-dhp2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.749032 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.814301 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.248813 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:30 crc kubenswrapper[4808]: W0217 16:17:30.256025 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4225bf1_ce01_4830_b857_2201d4e67fd6.slice/crio-b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe WatchSource:0}: Error finding container b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe: Status 404 returned error can't find the container with id b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.315805 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerStarted","Data":"b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe"} Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.321567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337"} Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.345344 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.750995 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.751482 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.751804 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.759819 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.160912 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" path="/var/lib/kubelet/pods/018b3b96-1953-4437-83ab-99bc970bcd36/volumes" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.161486 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67800510-1957-448c-88a1-0d2898a6524b" path="/var/lib/kubelet/pods/67800510-1957-448c-88a1-0d2898a6524b/volumes" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.331695 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e1acfe51-1173-4ce1-a645-d757d30e3312","Type":"ContainerStarted","Data":"f0e4e0459d4b30bcbc27bbf87d35c5a023f938b33320b620b4c3125771b4ca6f"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.331741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e1acfe51-1173-4ce1-a645-d757d30e3312","Type":"ContainerStarted","Data":"5c0fe5224fe64637ee65d7c020d56249fad757b4f26c1d5910f3c48ed30b6247"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.336060 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.342531 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerStarted","Data":"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.342665 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerStarted","Data":"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.342681 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.345248 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.354965 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.354948062 podStartE2EDuration="2.354948062s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:31.34997013 +0000 UTC m=+1414.866329203" watchObservedRunningTime="2026-02-17 16:17:31.354948062 +0000 UTC m=+1414.871307135" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.392991 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.392974817 podStartE2EDuration="2.392974817s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:31.389503436 +0000 UTC m=+1414.905862509" watchObservedRunningTime="2026-02-17 16:17:31.392974817 +0000 UTC m=+1414.909333890" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.557409 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.559026 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.593798 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.656520 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679751 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679825 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679899 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679963 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.781817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.781934 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: 
\"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.781994 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.782033 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.782056 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.782086 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.783177 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.783892 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.784087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.784417 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.784621 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 
16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.837670 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.914120 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:32 crc kubenswrapper[4808]: I0217 16:17:32.392213 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.358488 4808 generic.go:334] "Generic (PLEG): container finished" podID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" exitCode=0 Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.358563 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerDied","Data":"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e"} Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.359010 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerStarted","Data":"8fe947d0790a922756d78327f84cf510a97c6419a7ba4cf6d5a3665a8b91aebe"} Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.361772 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec"} Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.446589 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.172050217 podStartE2EDuration="7.446567509s" podCreationTimestamp="2026-02-17 16:17:26 +0000 UTC" firstStartedPulling="2026-02-17 16:17:27.154497842 +0000 UTC m=+1410.670856915" lastFinishedPulling="2026-02-17 16:17:32.429015134 +0000 UTC m=+1415.945374207" observedRunningTime="2026-02-17 16:17:33.438284881 +0000 UTC m=+1416.954643954" watchObservedRunningTime="2026-02-17 16:17:33.446567509 +0000 UTC m=+1416.962926572" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.465348 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.474201 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.491880 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.633781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.633877 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.633965 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736106 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736810 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.763260 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.889657 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.373407 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerStarted","Data":"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65"} Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.373941 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.413996 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" podStartSLOduration=3.413971288 podStartE2EDuration="3.413971288s" podCreationTimestamp="2026-02-17 16:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:34.403213673 +0000 UTC m=+1417.919572746" watchObservedRunningTime="2026-02-17 16:17:34.413971288 +0000 UTC m=+1417.930330371" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.432199 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.432440 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" containerID="cri-o://8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90" gracePeriod=30 Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.432561 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" containerID="cri-o://8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8" gracePeriod=30 Feb 17 16:17:34 crc kubenswrapper[4808]: W0217 16:17:34.503857 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c5cc0b_1b55_465f_8f31_fd8575d07242.slice/crio-11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b WatchSource:0}: Error finding container 11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b: Status 404 returned error can't find the container with id 11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.517606 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.749598 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.749960 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.815144 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.384039 4808 generic.go:334] "Generic (PLEG): container finished" podID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerID="8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90" exitCode=143 Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.384093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerDied","Data":"8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90"} Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.385693 4808 generic.go:334] "Generic (PLEG): container finished" podID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerID="e98a2e96df763da34095f5b36d490a12752ad034b23f41d68bf217b2eaf71996" exitCode=0 Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.387078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"e98a2e96df763da34095f5b36d490a12752ad034b23f41d68bf217b2eaf71996"} Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.387098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerStarted","Data":"11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b"} Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.387111 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.397625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerStarted","Data":"77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1"} Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.800099 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.800868 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" containerID="cri-o://d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4" gracePeriod=30 Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.801375 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" containerID="cri-o://721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec" gracePeriod=30 Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.801497 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" containerID="cri-o://35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337" gracePeriod=30 Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.801551 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" containerID="cri-o://a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed" gracePeriod=30 Feb 17 
16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.411430 4808 generic.go:334] "Generic (PLEG): container finished" podID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerID="77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.411533 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425002 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425072 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed" exitCode=2 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425089 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425103 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425241 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.341903 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.429885 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.429984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430032 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430711 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430736 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430763 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.431802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.432050 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.432565 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.432608 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.436299 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk" (OuterVolumeSpecName: "kube-api-access-fwssk") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "kube-api-access-fwssk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.442591 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerStarted","Data":"4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.448050 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"ab32feefa5626c6c7de2470473cdca164dd77fd77015ec801b8e2ecef92b4ac6"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.448101 4808 scope.go:117] "RemoveContainer" containerID="721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.448266 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454442 4808 generic.go:334] "Generic (PLEG): container finished" podID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerID="8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8" exitCode=0 Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerDied","Data":"8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454534 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerDied","Data":"98396bda825cd064a21268c85ea75ac821bba4f4fc3e844ab94ef3298d308124"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454545 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98396bda825cd064a21268c85ea75ac821bba4f4fc3e844ab94ef3298d308124" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.455602 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts" (OuterVolumeSpecName: "scripts") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.470531 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbtkb" podStartSLOduration=3.012180004 podStartE2EDuration="5.470513788s" podCreationTimestamp="2026-02-17 16:17:33 +0000 UTC" firstStartedPulling="2026-02-17 16:17:35.387730284 +0000 UTC m=+1418.904089347" lastFinishedPulling="2026-02-17 16:17:37.846064058 +0000 UTC m=+1421.362423131" observedRunningTime="2026-02-17 16:17:38.459461325 +0000 UTC m=+1421.975820408" watchObservedRunningTime="2026-02-17 16:17:38.470513788 +0000 UTC m=+1421.986872861" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.478995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.526170 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534792 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534825 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534835 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534843 4808 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.569764 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data" (OuterVolumeSpecName: "config-data") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.571437 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.604724 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.607110 4808 scope.go:117] "RemoveContainer" containerID="a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.630742 4808 scope.go:117] "RemoveContainer" containerID="35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.637185 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.637228 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.659944 4808 scope.go:117] "RemoveContainer" containerID="d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.738717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.738817 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.738980 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.739109 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.740216 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs" (OuterVolumeSpecName: "logs") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.743738 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p" (OuterVolumeSpecName: "kube-api-access-7629p") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "kube-api-access-7629p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.770133 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data" (OuterVolumeSpecName: "config-data") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.771715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.804723 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.816934 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.829708 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830439 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830459 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830471 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830479 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830506 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830514 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830531 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830537 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830547 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830552 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830565 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" 
containerName="nova-api-log" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830575 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830778 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830791 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830799 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830809 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830817 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830825 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.832943 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.838127 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.838299 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.838415 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.843039 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.844986 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.845010 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.845022 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.845030 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946578 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946898 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946972 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049117 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049139 
4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049413 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049456 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049569 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049764 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.050034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.052931 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.053974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.053991 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.055142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.056327 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.070088 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.153180 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.158176 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" path="/var/lib/kubelet/pods/28d43ac9-e802-4679-a989-5032d56ea9dd/volumes" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.476849 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.509402 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.526135 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.539802 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.542148 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.545036 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.545270 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.545418 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.572207 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.661883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.661935 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662042 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662099 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662124 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662156 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.669613 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.752095 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.752133 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764597 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764666 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764732 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764944 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.765320 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.773428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.774427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.775101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.782345 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.805170 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.816336 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.864109 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.957994 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.446257 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.503381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerStarted","Data":"ea9847b252efaef71e3a85841133385f61299d19b321c26d06d5bb202a3896ea"} Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.507323 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"3b118204dd16ab977f67d0447b3dc8abe3067fde9909bbf01899be9a3a24cb87"} Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.526203 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.725398 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"] Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.727010 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.731971 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.732322 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.743167 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"] Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796314 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796430 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796500 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.803881 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.804014 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.898212 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.898804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.898906 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.899130 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.903238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.906604 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.913366 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.922123 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.075843 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.158345 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" path="/var/lib/kubelet/pods/646d437b-8ce5-47ba-8fc6-9c6451caacc8/volumes" Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.521066 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerStarted","Data":"ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248"} Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.521357 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerStarted","Data":"b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4"} Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.545192 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe"} Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.545232 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c"} Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.546987 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5469765840000003 podStartE2EDuration="2.546976584s" podCreationTimestamp="2026-02-17 16:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:41.545548117 +0000 UTC m=+1425.061907190" watchObservedRunningTime="2026-02-17 16:17:41.546976584 +0000 UTC m=+1425.063335657" Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.658207 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"] Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.920756 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.016163 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.028856 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns" containerID="cri-o://60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae" gracePeriod=10 Feb 17 16:17:42 crc kubenswrapper[4808]: E0217 16:17:42.172262 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17dd9003_af7c_4ead_bd8a_69dd599672e1.slice/crio-60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.555758 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" 
event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerStarted","Data":"af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79"} Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.557198 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerStarted","Data":"2b898e02f703f3e6f00a35ddb4ceb83c7f74fbaad9c4fcf19b31734489f2f161"} Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.500744 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.216:5353: connect: connection refused" Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.587625 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lf98l" podStartSLOduration=3.587603681 podStartE2EDuration="3.587603681s" podCreationTimestamp="2026-02-17 16:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:43.578291694 +0000 UTC m=+1427.094650767" watchObservedRunningTime="2026-02-17 16:17:43.587603681 +0000 UTC m=+1427.103962764" Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.890795 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.891171 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.942778 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.574961 4808 generic.go:334] "Generic (PLEG): container finished" podID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerID="60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae" exitCode=0 Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.575041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerDied","Data":"60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae"} Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.632859 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.692009 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.827830 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998211 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998692 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998932 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998995 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.999026 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.050604 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7" (OuterVolumeSpecName: "kube-api-access-dghr7") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "kube-api-access-dghr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.093465 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.093489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.094188 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.106062 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config" (OuterVolumeSpecName: "config") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107373 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107493 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107589 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107669 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107755 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.112638 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.210070 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.590816 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerDied","Data":"6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94"} Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.591136 4808 scope.go:117] "RemoveContainer" containerID="60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.590828 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.596972 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22"} Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.649882 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.670003 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.677023 4808 scope.go:117] "RemoveContainer" containerID="3ef21441db2673d8cb4a73235d72eeb9fb765f3ab14514345fdd78ed72a42293" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.585830 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-scd77"] Feb 17 16:17:46 crc kubenswrapper[4808]: E0217 16:17:46.586229 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="init" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.586246 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="init" Feb 17 16:17:46 crc kubenswrapper[4808]: E0217 16:17:46.586273 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.586280 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.586461 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.587940 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.597658 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"] Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.607740 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbtkb" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server" containerID="cri-o://4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6" gracePeriod=2 Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.742065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.742112 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.742402 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.844414 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845095 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.874232 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.909432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.159132 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" path="/var/lib/kubelet/pods/17dd9003-af7c-4ead-bd8a-69dd599672e1/volumes" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.647882 4808 generic.go:334] "Generic (PLEG): container finished" podID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerID="4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6" exitCode=0 Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.648156 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6"} Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.838550 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.971001 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"02c5cc0b-1b55-465f-8f31-fd8575d07242\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.971162 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"02c5cc0b-1b55-465f-8f31-fd8575d07242\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.971253 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"02c5cc0b-1b55-465f-8f31-fd8575d07242\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.972266 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities" (OuterVolumeSpecName: "utilities") pod "02c5cc0b-1b55-465f-8f31-fd8575d07242" (UID: "02c5cc0b-1b55-465f-8f31-fd8575d07242"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.975481 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq" (OuterVolumeSpecName: "kube-api-access-6mnpq") pod "02c5cc0b-1b55-465f-8f31-fd8575d07242" (UID: "02c5cc0b-1b55-465f-8f31-fd8575d07242"). InnerVolumeSpecName "kube-api-access-6mnpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.023605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c5cc0b-1b55-465f-8f31-fd8575d07242" (UID: "02c5cc0b-1b55-465f-8f31-fd8575d07242"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.074866 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.074931 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.074956 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.078978 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"] Feb 17 16:17:48 crc kubenswrapper[4808]: W0217 16:17:48.090102 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdd136e1_cf53_4300_9df6_53bfb28905cd.slice/crio-beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1 WatchSource:0}: Error finding container beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1: Status 404 returned error can't find the container with id beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1 Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.662637 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689"} Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.663170 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.667170 4808 generic.go:334] "Generic (PLEG): container finished" podID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914" exitCode=0 Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.667422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914"} Feb 17 16:17:48 crc 
kubenswrapper[4808]: I0217 16:17:48.667457 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerStarted","Data":"beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1"}
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.671256 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b"}
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.671300 4808 scope.go:117] "RemoveContainer" containerID="4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6"
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.671426 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb"
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.692459 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.77610477 podStartE2EDuration="10.69244161s" podCreationTimestamp="2026-02-17 16:17:38 +0000 UTC" firstStartedPulling="2026-02-17 16:17:39.674469118 +0000 UTC m=+1423.190828191" lastFinishedPulling="2026-02-17 16:17:47.590805948 +0000 UTC m=+1431.107165031" observedRunningTime="2026-02-17 16:17:48.688022963 +0000 UTC m=+1432.204382076" watchObservedRunningTime="2026-02-17 16:17:48.69244161 +0000 UTC m=+1432.208800683"
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.695238 4808 scope.go:117] "RemoveContainer" containerID="77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1"
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.730811 4808 scope.go:117] "RemoveContainer" containerID="e98a2e96df763da34095f5b36d490a12752ad034b23f41d68bf217b2eaf71996"
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.735884 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"]
Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.750646 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"]
Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.161627 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" path="/var/lib/kubelet/pods/02c5cc0b-1b55-465f-8f31-fd8575d07242/volumes"
Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.756984 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.757057 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.765313 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.767988 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.871833 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.871895 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.699801 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerStarted","Data":"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"}
Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.701838 4808 generic.go:334] "Generic (PLEG): container finished" podID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerID="af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79" exitCode=0
Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.701865 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerDied","Data":"af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79"}
Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.884853 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.884886 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.168223 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.367404 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") "
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.367707 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") "
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.367787 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") "
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.368299 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") "
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.372998 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts" (OuterVolumeSpecName: "scripts") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.381785 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866" (OuterVolumeSpecName: "kube-api-access-4l866") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "kube-api-access-4l866". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.403552 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data" (OuterVolumeSpecName: "config-data") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.408621 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.475393 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.475817 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.475922 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.476032 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.737593 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerDied","Data":"2b898e02f703f3e6f00a35ddb4ceb83c7f74fbaad9c4fcf19b31734489f2f161"}
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.737634 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b898e02f703f3e6f00a35ddb4ceb83c7f74fbaad9c4fcf19b31734489f2f161"
Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.737731 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.017448 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.018288 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log" containerID="cri-o://b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4" gracePeriod=30
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.018352 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api" containerID="cri-o://ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248" gracePeriod=30
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.040537 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.040948 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler" containerID="cri-o://d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" gracePeriod=30
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.065650 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.065917 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log" containerID="cri-o://0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a" gracePeriod=30
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.066109 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata" containerID="cri-o://ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59" gracePeriod=30
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.747132 4808 generic.go:334] "Generic (PLEG): container finished" podID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerID="b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4" exitCode=143
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.747211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerDied","Data":"b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4"}
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.748860 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a" exitCode=143
Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.748895 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerDied","Data":"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"}
Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.562832 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.564152 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.564692 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.564749 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler"
Feb 17 16:17:55 crc kubenswrapper[4808]: I0217 16:17:55.779864 4808 generic.go:334] "Generic (PLEG): container finished" podID="c906d5a8-4187-4f58-a352-fa7faea85309" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" exitCode=0
Feb 17 16:17:55 crc kubenswrapper[4808]: I0217 16:17:55.779968 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerDied","Data":"d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372"}
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.213868 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": read tcp 10.217.0.2:46540->10.217.0.223:8775: read: connection reset by peer"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.214261 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": read tcp 10.217.0.2:46528->10.217.0.223:8775: read: connection reset by peer"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.368053 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.455235 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"c906d5a8-4187-4f58-a352-fa7faea85309\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.455456 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"c906d5a8-4187-4f58-a352-fa7faea85309\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.455534 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"c906d5a8-4187-4f58-a352-fa7faea85309\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.476818 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r" (OuterVolumeSpecName: "kube-api-access-crb6r") pod "c906d5a8-4187-4f58-a352-fa7faea85309" (UID: "c906d5a8-4187-4f58-a352-fa7faea85309"). InnerVolumeSpecName "kube-api-access-crb6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.508754 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c906d5a8-4187-4f58-a352-fa7faea85309" (UID: "c906d5a8-4187-4f58-a352-fa7faea85309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.555539 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data" (OuterVolumeSpecName: "config-data") pod "c906d5a8-4187-4f58-a352-fa7faea85309" (UID: "c906d5a8-4187-4f58-a352-fa7faea85309"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.573614 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.573651 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.573694 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.710704 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.796513 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerDied","Data":"3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8"}
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.796562 4808 scope.go:117] "RemoveContainer" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.796750 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800796 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59" exitCode=0
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800820 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerDied","Data":"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"}
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerDied","Data":"b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe"}
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.829940 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.849058 4808 scope.go:117] "RemoveContainer" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.858382 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867134 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867628 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867640 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867663 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerName="nova-manage"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867669 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerName="nova-manage"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867683 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867689 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867704 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-utilities"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867710 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-utilities"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867726 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867734 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867754 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-content"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867759 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-content"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867771 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867779 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867998 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868009 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868029 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868043 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868063 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerName="nova-manage"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868815 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.870709 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.874388 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880315 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880554 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880615 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880670 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.881186 4808 scope.go:117] "RemoveContainer" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.881690 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs" (OuterVolumeSpecName: "logs") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.907287 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx" (OuterVolumeSpecName: "kube-api-access-nzbxx") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "kube-api-access-nzbxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915058 4808 scope.go:117] "RemoveContainer" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.915728 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59\": container with ID starting with ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59 not found: ID does not exist" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915768 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"} err="failed to get container status \"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59\": rpc error: code = NotFound desc = could not find container \"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59\": container with ID starting with ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59 not found: ID does not exist"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915792 4808 scope.go:117] "RemoveContainer" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.916697 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a\": container with ID starting with 0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a not found: ID does not exist" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.916740 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"} err="failed to get container status \"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a\": rpc error: code = NotFound desc = could not find container \"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a\": container with ID starting with 0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a not found: ID does not exist"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.930289 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data" (OuterVolumeSpecName: "config-data") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.955842 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.982888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbktk\" (UniqueName: \"kubernetes.io/projected/4481dde9-062b-48d4-ae35-b6fa96ccf94e-kube-api-access-lbktk\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.982932 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983090 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-config-data\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983391 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983421 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983435 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983452 4808 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983464 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.085499 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-config-data\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.085645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbktk\" (UniqueName: \"kubernetes.io/projected/4481dde9-062b-48d4-ae35-b6fa96ccf94e-kube-api-access-lbktk\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.085667 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.088828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-config-data\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.088900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.106223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbktk\" (UniqueName: \"kubernetes.io/projected/4481dde9-062b-48d4-ae35-b6fa96ccf94e-kube-api-access-lbktk\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.136336 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.166745 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" path="/var/lib/kubelet/pods/c906d5a8-4187-4f58-a352-fa7faea85309/volumes"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.167434 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.167473 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.173655 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.178010 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.178317 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.205890 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.208780 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294860 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdf54f1-8cfa-46c6-addd-bda126337c05-logs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjl9\" (UniqueName: \"kubernetes.io/projected/fbdf54f1-8cfa-46c6-addd-bda126337c05-kube-api-access-jrjl9\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294924 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.295001 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-config-data\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396752 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdf54f1-8cfa-46c6-addd-bda126337c05-logs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396831 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjl9\" (UniqueName: \"kubernetes.io/projected/fbdf54f1-8cfa-46c6-addd-bda126337c05-kube-api-access-jrjl9\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-config-data\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.397416 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdf54f1-8cfa-46c6-addd-bda126337c05-logs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.402065 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.402267 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-config-data\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.402462 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.417444 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjl9\" (UniqueName: \"kubernetes.io/projected/fbdf54f1-8cfa-46c6-addd-bda126337c05-kube-api-access-jrjl9\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.493496 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.815710 4808 generic.go:334] "Generic (PLEG): container finished" podID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerID="ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248" exitCode=0
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.815791 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerDied","Data":"ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:57.869142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: W0217 16:17:57.871258 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4481dde9_062b_48d4_ae35_b6fa96ccf94e.slice/crio-01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d WatchSource:0}: Error finding container 01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d: Status 404 returned error can't find the container with id 01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.039612 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117090 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117220 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117248 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117330 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117446 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.129297 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs" (OuterVolumeSpecName: "logs") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.136640 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj" (OuterVolumeSpecName: "kube-api-access-b26nj") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "kube-api-access-b26nj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.148273 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: W0217 16:17:58.151505 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbdf54f1_8cfa_46c6_addd_bda126337c05.slice/crio-57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a WatchSource:0}: Error finding container 57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a: Status 404 returned error can't find the container with id 57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.161097 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.182412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.191755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data" (OuterVolumeSpecName: "config-data") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.211305 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220841 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220869 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220881 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220891 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220904 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220914 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.831247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbdf54f1-8cfa-46c6-addd-bda126337c05","Type":"ContainerStarted","Data":"610af160e1941960b85a0b3a5740cab8df8fc0990aede2b062c280b582777eb1"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.831291 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbdf54f1-8cfa-46c6-addd-bda126337c05","Type":"ContainerStarted","Data":"aee42fad9d7ee53b5fdefc2286b5134b69be072ad3d32ae3e21f3e4d5364d295"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.831302 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbdf54f1-8cfa-46c6-addd-bda126337c05","Type":"ContainerStarted","Data":"57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.833089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4481dde9-062b-48d4-ae35-b6fa96ccf94e","Type":"ContainerStarted","Data":"63811202ee0ca69af9a75b2b7b90d7990ed5c27c26734790ca6227d824b4737c"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.833453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4481dde9-062b-48d4-ae35-b6fa96ccf94e","Type":"ContainerStarted","Data":"01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.835168 4808 generic.go:334] "Generic (PLEG): container finished" podID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5" exitCode=0
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.835199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.838074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerDied","Data":"ea9847b252efaef71e3a85841133385f61299d19b321c26d06d5bb202a3896ea"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.838213 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.839052 4808 scope.go:117] "RemoveContainer" containerID="ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.865089 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.864942314 podStartE2EDuration="2.864942314s" podCreationTimestamp="2026-02-17 16:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:58.85422334 +0000 UTC m=+1442.370582423" watchObservedRunningTime="2026-02-17 16:17:58.864942314 +0000 UTC m=+1442.381301387"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.885542 4808 scope.go:117] "RemoveContainer" containerID="b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.940616 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.969038 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987217 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: E0217 16:17:58.987716 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987729 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api"
Feb 17 16:17:58 crc kubenswrapper[4808]: E0217 16:17:58.987746 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987752 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987964 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987978 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.989078 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.992244 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.992260 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.992364 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.006507 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.136677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e91a7ada-9f3c-4a6c-a56e-355538c9a868-logs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.136799 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-config-data\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.136826 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-public-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.137512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvn4\" (UniqueName: \"kubernetes.io/projected/e91a7ada-9f3c-4a6c-a56e-355538c9a868-kube-api-access-psvn4\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.137568 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.137602 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.156506 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" path="/var/lib/kubelet/pods/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b/volumes"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.157413 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" path="/var/lib/kubelet/pods/f4225bf1-ce01-4830-b857-2201d4e67fd6/volumes"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240048 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e91a7ada-9f3c-4a6c-a56e-355538c9a868-logs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-config-data\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240394 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-public-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240478 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvn4\" (UniqueName: \"kubernetes.io/projected/e91a7ada-9f3c-4a6c-a56e-355538c9a868-kube-api-access-psvn4\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.241120 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e91a7ada-9f3c-4a6c-a56e-355538c9a868-logs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.246259 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.246564 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-config-data\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.247504 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.250084 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-public-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.258158 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvn4\" (UniqueName: \"kubernetes.io/projected/e91a7ada-9f3c-4a6c-a56e-355538c9a868-kube-api-access-psvn4\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.309945 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:59 crc kubenswrapper[4808]: W0217 16:17:59.844457 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode91a7ada_9f3c_4a6c_a56e_355538c9a868.slice/crio-7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4 WatchSource:0}: Error finding container 7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4: Status 404 returned error can't find the container with id 7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.849373 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.859195 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerStarted","Data":"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"}
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.884836 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-scd77" podStartSLOduration=3.120631854 podStartE2EDuration="13.884818601s" podCreationTimestamp="2026-02-17 16:17:46 +0000 UTC" firstStartedPulling="2026-02-17 16:17:48.674163886 +0000 UTC m=+1432.190522979" lastFinishedPulling="2026-02-17 16:17:59.438350653 +0000 UTC m=+1442.954709726" observedRunningTime="2026-02-17 16:17:59.875047703 +0000 UTC m=+1443.391406776" watchObservedRunningTime="2026-02-17 16:17:59.884818601 +0000 UTC m=+1443.401177674"
Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.910481 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.91045298 podStartE2EDuration="2.91045298s" podCreationTimestamp="2026-02-17 16:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:59.899328695 +0000 UTC m=+1443.415687788" watchObservedRunningTime="2026-02-17 16:17:59.91045298 +0000 UTC m=+1443.426812043"
Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.871213 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e91a7ada-9f3c-4a6c-a56e-355538c9a868","Type":"ContainerStarted","Data":"6eaabe9155721ee1f7bc24c6493d78d1b78c85a39555ac7bb4b0f6e8d4897798"}
Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.871559 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e91a7ada-9f3c-4a6c-a56e-355538c9a868","Type":"ContainerStarted","Data":"e607105fe44353f172957e4b6be74b049fac2dfe39ce413bc8e9b4b577e1f85b"}
Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.871591 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e91a7ada-9f3c-4a6c-a56e-355538c9a868","Type":"ContainerStarted","Data":"7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4"}
Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.910431 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.910408099 podStartE2EDuration="2.910408099s" podCreationTimestamp="2026-02-17 16:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:00.90586815 +0000 UTC m=+1444.422227223" watchObservedRunningTime="2026-02-17 16:18:00.910408099 +0000 UTC m=+1444.426767172"
Feb 17 16:18:02 crc kubenswrapper[4808]: I0217 16:18:02.210292 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 17 16:18:02 crc kubenswrapper[4808]: I0217 16:18:02.497679 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:18:02 crc kubenswrapper[4808]: I0217 16:18:02.498029 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:18:06 crc kubenswrapper[4808]: I0217 16:18:06.909971 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:18:06 crc kubenswrapper[4808]: I0217 16:18:06.910951 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.210234 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.239829 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.495131 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.495196 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.964438 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-scd77" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server" probeResult="failure" output=<
Feb 17 16:18:07 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 16:18:07 crc kubenswrapper[4808]: >
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.974510 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 17 16:18:08 crc kubenswrapper[4808]: I0217 16:18:08.511348 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbdf54f1-8cfa-46c6-addd-bda126337c05" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:18:08 crc kubenswrapper[4808]: I0217 16:18:08.511908 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbdf54f1-8cfa-46c6-addd-bda126337c05" containerName="nova-metadata-metadata" probeResult="failure"
output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:18:09 crc kubenswrapper[4808]: I0217 16:18:09.169683 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:18:09 crc kubenswrapper[4808]: I0217 16:18:09.310885 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:18:09 crc kubenswrapper[4808]: I0217 16:18:09.310959 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:18:10 crc kubenswrapper[4808]: I0217 16:18:10.332748 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e91a7ada-9f3c-4a6c-a56e-355538c9a868" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:18:10 crc kubenswrapper[4808]: I0217 16:18:10.332768 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e91a7ada-9f3c-4a6c-a56e-355538c9a868" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:18:16 crc kubenswrapper[4808]: I0217 16:18:16.965480 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.020914 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.502372 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.503322 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.521851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.791986 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"] Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.041705 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-scd77" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server" containerID="cri-o://356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2" gracePeriod=2 Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.054839 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.628783 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.737337 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"fdd136e1-cf53-4300-9df6-53bfb28905cd\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.737459 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"fdd136e1-cf53-4300-9df6-53bfb28905cd\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.737486 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"fdd136e1-cf53-4300-9df6-53bfb28905cd\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.738132 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities" (OuterVolumeSpecName: "utilities") pod "fdd136e1-cf53-4300-9df6-53bfb28905cd" (UID: "fdd136e1-cf53-4300-9df6-53bfb28905cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.745315 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm" (OuterVolumeSpecName: "kube-api-access-4rlpm") pod "fdd136e1-cf53-4300-9df6-53bfb28905cd" (UID: "fdd136e1-cf53-4300-9df6-53bfb28905cd"). InnerVolumeSpecName "kube-api-access-4rlpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.840892 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.841262 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.859752 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdd136e1-cf53-4300-9df6-53bfb28905cd" (UID: "fdd136e1-cf53-4300-9df6-53bfb28905cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.943316 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.068173 4808 generic.go:334] "Generic (PLEG): container finished" podID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2" exitCode=0 Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.069109 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.076770 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"} Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.076847 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1"} Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.076870 4808 scope.go:117] "RemoveContainer" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.101948 4808 scope.go:117] "RemoveContainer" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.116679 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"] Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.121402 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"] Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.137299 4808 scope.go:117] "RemoveContainer" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.158146 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" path="/var/lib/kubelet/pods/fdd136e1-cf53-4300-9df6-53bfb28905cd/volumes" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.199983 4808 scope.go:117] "RemoveContainer" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2" Feb 17 16:18:19 crc kubenswrapper[4808]: E0217 16:18:19.200519 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2\": container with ID starting with 356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2 not found: ID does not exist" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200559 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"} err="failed to get container status \"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2\": rpc error: code = NotFound desc 
= could not find container \"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2\": container with ID starting with 356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2 not found: ID does not exist" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200607 4808 scope.go:117] "RemoveContainer" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5" Feb 17 16:18:19 crc kubenswrapper[4808]: E0217 16:18:19.200938 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5\": container with ID starting with 70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5 not found: ID does not exist" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200970 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"} err="failed to get container status \"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5\": rpc error: code = NotFound desc = could not find container \"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5\": container with ID starting with 70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5 not found: ID does not exist" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200992 4808 scope.go:117] "RemoveContainer" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914" Feb 17 16:18:19 crc kubenswrapper[4808]: E0217 16:18:19.201392 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914\": container with ID starting with 4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914 not found: ID does not exist" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.201429 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914"} err="failed to get container status \"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914\": rpc error: code = NotFound desc = could not find container \"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914\": container with ID starting with 4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914 not found: ID does not exist" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.350589 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.351105 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.387129 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.391638 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:18:20 crc kubenswrapper[4808]: I0217 16:18:20.079949 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:18:20 crc kubenswrapper[4808]: I0217 
16:18:20.087424 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:18:29 crc kubenswrapper[4808]: I0217 16:18:29.954016 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:18:29 crc kubenswrapper[4808]: I0217 16:18:29.972742 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.050314 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-zl7nk"] Feb 17 16:18:30 crc kubenswrapper[4808]: E0217 16:18:30.050881 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-utilities" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.050907 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-utilities" Feb 17 16:18:30 crc kubenswrapper[4808]: E0217 16:18:30.050951 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-content" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.050959 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-content" Feb 17 16:18:30 crc kubenswrapper[4808]: E0217 16:18:30.050993 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.051002 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.051235 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.052133 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.054358 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.064088 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-zl7nk"] Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098009 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-config-data\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098220 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-scripts\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098549 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-certs\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098950 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-combined-ca-bundle\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.099019 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnd2x\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-kube-api-access-fnd2x\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.200594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-certs\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.200665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-combined-ca-bundle\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.201173 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnd2x\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-kube-api-access-fnd2x\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 
16:18:30.201403 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-config-data\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.201608 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-scripts\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.205841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-certs\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.206116 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-combined-ca-bundle\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.206590 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-config-data\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.206995 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-scripts\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.216100 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnd2x\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-kube-api-access-fnd2x\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.403736 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.038027 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-zl7nk"] Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.157327 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec52dbb-ca2f-4013-8536-972042607240" path="/var/lib/kubelet/pods/2ec52dbb-ca2f-4013-8536-972042607240/volumes" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.162602 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.162682 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.162809 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.163977 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.188589 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-zl7nk" event={"ID":"a4b182d0-48fc-4487-b7ad-18f7803a4d4c","Type":"ContainerStarted","Data":"46a08a8f711b48444ba77a762f412674bac93643320d67f0c19168069a38f058"} Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.190105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.808703 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809322 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" containerID="cri-o://d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" gracePeriod=30 Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809775 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" containerID="cri-o://de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" gracePeriod=30 Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809886 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" containerID="cri-o://5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" gracePeriod=30 Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809998 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" containerID="cri-o://c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" gracePeriod=30 Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201461 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" exitCode=0 Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201501 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" exitCode=2 Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689"} Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22"} Feb 17 16:18:32 crc kubenswrapper[4808]: E0217 16:18:32.203249 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.261685 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:33 crc kubenswrapper[4808]: I0217 16:18:33.213966 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" exitCode=0 Feb 17 16:18:33 crc kubenswrapper[4808]: I0217 16:18:33.214104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c"} Feb 17 16:18:33 crc kubenswrapper[4808]: I0217 16:18:33.391957 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:35 crc kubenswrapper[4808]: I0217 16:18:35.925014 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.123938 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.123991 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124024 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124198 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124238 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124311 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124329 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.125112 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.125383 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.129724 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts" (OuterVolumeSpecName: "scripts") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.131293 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4" (OuterVolumeSpecName: "kube-api-access-2p8c4") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "kube-api-access-2p8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.157788 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.213656 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226555 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226650 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226660 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226668 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226679 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226688 4808 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259810 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" exitCode=0 Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259861 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259866 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe"} Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"3b118204dd16ab977f67d0447b3dc8abe3067fde9909bbf01899be9a3a24cb87"} Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259922 4808 scope.go:117] "RemoveContainer" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.289642 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.301861 4808 scope.go:117] "RemoveContainer" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.328512 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.330025 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data" (OuterVolumeSpecName: "config-data") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.337182 4808 scope.go:117] "RemoveContainer" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.358451 4808 scope.go:117] "RemoveContainer" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.377217 4808 scope.go:117] "RemoveContainer" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.377808 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689\": container with ID starting with de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689 not found: ID does not exist" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.377835 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689"} err="failed to get container status \"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689\": rpc error: code = NotFound desc = could not find container \"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689\": container with ID starting with de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689 not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.377856 4808 scope.go:117] "RemoveContainer" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.378181 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22\": container with ID starting with 5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22 not found: ID does not exist" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378203 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22"} err="failed to get container status \"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22\": rpc error: code = NotFound desc = could not 
find container \"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22\": container with ID starting with 5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22 not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378214 4808 scope.go:117] "RemoveContainer" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.378450 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe\": container with ID starting with c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe not found: ID does not exist" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378472 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe"} err="failed to get container status \"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe\": rpc error: code = NotFound desc = could not find container \"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe\": container with ID starting with c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378484 4808 scope.go:117] "RemoveContainer" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.378663 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c\": container with ID starting with d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c not found: ID does not exist" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378682 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c"} err="failed to get container status \"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c\": rpc error: code = NotFound desc = could not find container \"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c\": container with ID starting with d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.430419 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.597113 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.607426 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630062 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630424 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630440 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630457 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630464 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630483 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630488 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630513 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630518 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630723 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630739 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630756 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630768 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.635260 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.637655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.637908 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.638411 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.666124 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgf2\" (UniqueName: \"kubernetes.io/projected/2876084b-7055-449d-9ddb-447d3a515d80-kube-api-access-rjgf2\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-config-data\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-run-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736316 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-scripts\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736334 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736359 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-log-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736430 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838373 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838456 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjgf2\" (UniqueName: \"kubernetes.io/projected/2876084b-7055-449d-9ddb-447d3a515d80-kube-api-access-rjgf2\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838593 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-config-data\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838623 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-run-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838702 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-scripts\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838727 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838763 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-log-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.839244 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-log-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.840338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-run-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.843365 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-scripts\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.843856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.845323 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.856907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.857681 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjgf2\" (UniqueName: \"kubernetes.io/projected/2876084b-7055-449d-9ddb-447d3a515d80-kube-api-access-rjgf2\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.858672 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-config-data\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.900514 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" containerID="cri-o://d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e" gracePeriod=604796 Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.959344 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:37 crc kubenswrapper[4808]: I0217 16:18:37.181062 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" path="/var/lib/kubelet/pods/f17f0491-7507-40fb-a2b9-d13d2c51eed6/volumes" Feb 17 16:18:37 crc kubenswrapper[4808]: I0217 16:18:37.457010 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:37 crc kubenswrapper[4808]: I0217 16:18:37.557470 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" containerID="cri-o://a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" gracePeriod=604796 Feb 17 16:18:37 crc kubenswrapper[4808]: E0217 16:18:37.578229 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:37 crc kubenswrapper[4808]: E0217 16:18:37.578302 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:37 crc kubenswrapper[4808]: E0217 16:18:37.578430 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:38 crc kubenswrapper[4808]: I0217 16:18:38.286120 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"f92594a71ea944bf109615e581db18efb031cc05bb8c8d28aae1396df5d993f8"} Feb 17 16:18:39 crc kubenswrapper[4808]: I0217 16:18:39.299605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"bd3198028a543422a4bd4d3a3cd25c69aef82a35267a9cbb49dca0aff6c1e668"} Feb 17 16:18:39 crc kubenswrapper[4808]: I0217 16:18:39.299998 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"2d41f32d17275147482bb41cb71d9147907575108a2bbf4b49468be01106e41a"} Feb 17 16:18:40 crc kubenswrapper[4808]: E0217 16:18:40.613437 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:41 crc kubenswrapper[4808]: I0217 16:18:41.332072 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"acb126793a2542f2fe3045ec80693fb67ee69ce5e18a3a82729621b0d384f1b3"} Feb 17 16:18:41 crc kubenswrapper[4808]: I0217 16:18:41.332354 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:18:41 crc kubenswrapper[4808]: E0217 16:18:41.335284 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:41 crc kubenswrapper[4808]: I0217 16:18:41.779979 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 17 16:18:42 crc kubenswrapper[4808]: I0217 16:18:42.038325 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 17 16:18:42 crc kubenswrapper[4808]: E0217 16:18:42.352863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.363831 4808 generic.go:334] 
"Generic (PLEG): container finished" podID="698c36e9-5f87-4836-8660-aaceac669005" containerID="d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e" exitCode=0 Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.363895 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerDied","Data":"d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e"} Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.680309 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814428 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814553 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814623 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814739 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814811 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814845 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.815503 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.815664 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.815692 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.816194 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.816863 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.817971 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.817999 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.815800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.824944 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f" (OuterVolumeSpecName: "kube-api-access-bqv9f") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "kube-api-access-bqv9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.836500 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.838901 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.840183 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info" (OuterVolumeSpecName: "pod-info") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: E0217 16:18:43.859320 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59be2048_a5c9_44c9_a3ef_651002555ff0.slice/crio-conmon-a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59be2048_a5c9_44c9_a3ef_651002555ff0.slice/crio-a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.864038 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4" (OuterVolumeSpecName: "persistence") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "pvc-41460aca-532a-4a4a-9959-90e4e175e3d4". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.876141 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data" (OuterVolumeSpecName: "config-data") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.921129 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922345 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922424 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922497 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") on node \"crc\" " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922565 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922731 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922795 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.997242 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.997449 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41460aca-532a-4a4a-9959-90e4e175e3d4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4") on node "crc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.010163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.022190 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf" (OuterVolumeSpecName: "server-conf") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.025193 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.025220 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.025231 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.291538 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.329929 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330017 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330056 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330083 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330136 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.331422 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.331661 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.332448 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.332540 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.333110 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.333241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.333331 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.334276 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.334915 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.335515 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.342672 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj" (OuterVolumeSpecName: "kube-api-access-flvtj") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "kube-api-access-flvtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.342813 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.349509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info" (OuterVolumeSpecName: "pod-info") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.355110 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.360419 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5" (OuterVolumeSpecName: "persistence") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "pvc-768b6430-57c2-4601-b30e-a3b0639286e5". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.384076 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerDied","Data":"57ad7e9e95603b9e00dced5aff567d0fff1bbfb9d96b8bfdb7074f711d80c274"} Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.384132 4808 scope.go:117] "RemoveContainer" containerID="d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.384317 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.424918 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data" (OuterVolumeSpecName: "config-data") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427459 4808 generic.go:334] "Generic (PLEG): container finished" podID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" exitCode=0 Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427500 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerDied","Data":"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807"} Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427527 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerDied","Data":"f86bb416640f1c93ce31ac0513d794573c83b4fcf30431f9c4619fd3c48ca73d"} Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427532 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437273 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437637 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437653 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437683 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") on node \"crc\" " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437699 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437714 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437726 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437739 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.458088 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf" 
(OuterVolumeSpecName: "server-conf") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.485497 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.499096 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.514558 4808 scope.go:117] "RemoveContainer" containerID="19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.540451 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.542966 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.542995 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.543025 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543034 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.543045 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543053 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.543068 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543074 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543305 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543332 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.544681 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.544897 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.545263 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.545274 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-768b6430-57c2-4601-b30e-a3b0639286e5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5") on node "crc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.553977 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.553987 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.553987 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.554187 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gc9dp" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.554223 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.554845 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.556689 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.556894 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.570163 4808 scope.go:117] "RemoveContainer" containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.602695 4808 scope.go:117] "RemoveContainer" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.627269 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.636824 4808 scope.go:117] "RemoveContainer" containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.638115 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807\": container with ID starting with a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807 not found: ID does not exist" containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.638175 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807"} err="failed to get container status \"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807\": rpc error: code = NotFound desc = could not find container \"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807\": container with ID starting with a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807 not found: ID does not exist" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.638202 4808 scope.go:117] "RemoveContainer" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.638613 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9\": container with ID starting with 5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9 not found: ID does not exist" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.638650 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9"} err="failed to get container status \"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9\": rpc error: code = NotFound desc = could not find container \"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9\": container with ID starting with 5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9 not found: ID does not exist" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647598 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-config-data\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647653 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/357e5513-bef7-45cc-b62f-072a161ccce3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647681 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647695 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhc4\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-kube-api-access-szhc4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647768 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647797 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647819 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/357e5513-bef7-45cc-b62f-072a161ccce3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.648133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.648340 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.648397 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750113 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/357e5513-bef7-45cc-b62f-072a161ccce3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750183 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szhc4\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-kube-api-access-szhc4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750314 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750343 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/357e5513-bef7-45cc-b62f-072a161ccce3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750396 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750488 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750536 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750548 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-config-data\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.752255 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-config-data\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.752749 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.752954 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.753595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.758551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.758591 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/357e5513-bef7-45cc-b62f-072a161ccce3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 
16:18:44.758916 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/357e5513-bef7-45cc-b62f-072a161ccce3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.759252 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.759292 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f412b4a2036f29492410677330a9ca63ffe6d8a8c319c56d242ee67a4a97d25/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.759370 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.774767 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.789668 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szhc4\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-kube-api-access-szhc4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.797826 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.830220 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.832516 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835353 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835532 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835672 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835823 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.841228 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gsb4q" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.841562 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.841771 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.849089 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.867200 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.886454 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.960905 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961286 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65t8j\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-kube-api-access-65t8j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961442 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961521 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961584 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961624 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961655 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961692 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961723 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063352 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063424 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063461 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063510 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063539 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65t8j\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-kube-api-access-65t8j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063602 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063860 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.064802 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.064837 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.065235 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.065799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.066536 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.069034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.069197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.069383 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.073982 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.074012 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be40d6772f21ead376a83ce27352b0ce535ee01ddc50414a5dc6453b6d9bcfec/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.075177 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.094230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65t8j\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-kube-api-access-65t8j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.136028 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.156988 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.186345 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" path="/var/lib/kubelet/pods/59be2048-a5c9-44c9-a3ef-651002555ff0/volumes" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.202591 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="698c36e9-5f87-4836-8660-aaceac669005" path="/var/lib/kubelet/pods/698c36e9-5f87-4836-8660-aaceac669005/volumes" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.384625 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:45 crc kubenswrapper[4808]: W0217 16:18:45.421168 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod357e5513_bef7_45cc_b62f_072a161ccce3.slice/crio-3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d WatchSource:0}: Error finding container 3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d: Status 404 returned error can't find the container with id 3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.463478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerStarted","Data":"3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d"} Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.661975 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:45 crc kubenswrapper[4808]: W0217 16:18:45.666315 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da8d67e_00c6_4ba1_a08b_09c5653d93fd.slice/crio-e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d WatchSource:0}: Error finding container e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d: Status 404 returned error can't find the container with id e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d Feb 17 16:18:46 crc kubenswrapper[4808]: I0217 16:18:46.484811 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerStarted","Data":"e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d"} Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.286285 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.286688 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.286835 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.288196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.375157 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.376807 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.383264 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.391980 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426224 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426594 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426708 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: 
\"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.498768 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerStarted","Data":"5ca487733509062335b917cabbb5c95c9c9189e5d3adc4142b7ced90b7a9fc87"} Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528810 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528890 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528916 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528967 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.529026 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.529891 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.530419 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.530953 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.531495 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.531515 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.531907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.553376 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.696479 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:48 crc kubenswrapper[4808]: W0217 16:18:48.221419 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod409792c8_f6ab_44df_a8d8_8c08bc58ed30.slice/crio-c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df WatchSource:0}: Error finding container c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df: Status 404 returned error can't find the container with id c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.221502 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.514055 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerStarted","Data":"ae77a46583c3e8204d183609b0e2514ca4873bf349237e9718653cb5859c2857"} Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.516134 4808 generic.go:334] "Generic (PLEG): container finished" podID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerID="20dc982f9bc098e9d7e98d8a7978009b4306c29975504eb93ecc3923345a7b57" exitCode=0 Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.516179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerDied","Data":"20dc982f9bc098e9d7e98d8a7978009b4306c29975504eb93ecc3923345a7b57"} Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.516223 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerStarted","Data":"c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df"} Feb 17 16:18:49 crc kubenswrapper[4808]: I0217 16:18:49.531425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerStarted","Data":"d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1"} Feb 17 16:18:49 crc kubenswrapper[4808]: I0217 16:18:49.531732 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:49 crc kubenswrapper[4808]: I0217 16:18:49.554897 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" podStartSLOduration=2.554877162 podStartE2EDuration="2.554877162s" podCreationTimestamp="2026-02-17 16:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:49.550074954 +0000 UTC m=+1493.066434027" watchObservedRunningTime="2026-02-17 16:18:49.554877162 +0000 UTC m=+1493.071236235" Feb 17 16:18:51 crc kubenswrapper[4808]: I0217 16:18:51.592946 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:18:51 crc kubenswrapper[4808]: I0217 16:18:51.593257 4808 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:18:55 crc kubenswrapper[4808]: I0217 16:18:55.159306 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.277365 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.277467 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.277717 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.279068 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.602930 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.697762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.795487 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.795709 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="dnsmasq-dns" containerID="cri-o://726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" gracePeriod=10 Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.987780 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-mqnbz"] Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:57.993718 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.012627 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-mqnbz"] Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086609 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-svc\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086694 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-config\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086734 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s22fq\" (UniqueName: \"kubernetes.io/projected/3d16d4be-1ab3-4261-97a7-054701cf9dba-kube-api-access-s22fq\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086849 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086933 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.087011 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.087053 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189236 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-svc\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189287 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-config\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s22fq\" (UniqueName: \"kubernetes.io/projected/3d16d4be-1ab3-4261-97a7-054701cf9dba-kube-api-access-s22fq\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189344 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189494 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191128 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-config\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191865 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-svc\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191926 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.192121 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.219914 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s22fq\" (UniqueName: \"kubernetes.io/projected/3d16d4be-1ab3-4261-97a7-054701cf9dba-kube-api-access-s22fq\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.332495 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.477711 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.596725 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.596806 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.596897 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.597007 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.597041 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.602055 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc" (OuterVolumeSpecName: "kube-api-access-fxgsc") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "kube-api-access-fxgsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648058 4808 generic.go:334] "Generic (PLEG): container finished" podID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" exitCode=0
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648113 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerDied","Data":"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65"}
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648148 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerDied","Data":"8fe947d0790a922756d78327f84cf510a97c6419a7ba4cf6d5a3665a8b91aebe"}
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648169 4808 scope.go:117] "RemoveContainer" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65"
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648358 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn"
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.663632 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.665419 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config" (OuterVolumeSpecName: "config") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.667096 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.672308 4808 scope.go:117] "RemoveContainer" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.676226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.688283 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693369 4808 scope.go:117] "RemoveContainer" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" Feb 17 16:18:58 crc kubenswrapper[4808]: E0217 16:18:58.693713 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65\": container with ID starting with 726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65 not found: ID does not exist" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693748 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65"} err="failed to get container status \"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65\": rpc error: code = NotFound desc = could not find container \"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65\": container with ID starting with 726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65 not found: ID does not exist" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693769 4808 scope.go:117] "RemoveContainer" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" Feb 17 16:18:58 crc kubenswrapper[4808]: E0217 16:18:58.693955 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e\": container with ID starting with b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e not found: ID does not exist" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693979 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e"} err="failed to get container status \"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e\": rpc error: code = NotFound desc = could not find container \"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e\": container with ID starting with b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e not found: 
ID does not exist" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699767 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699790 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699799 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699811 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699818 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699827 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.835990 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-mqnbz"] Feb 17 16:18:58 crc kubenswrapper[4808]: W0217 16:18:58.838420 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d16d4be_1ab3_4261_97a7_054701cf9dba.slice/crio-4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637 WatchSource:0}: Error finding container 4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637: Status 404 returned error can't find the container with id 4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637 Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.088953 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.097515 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.158484 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" path="/var/lib/kubelet/pods/236a76a9-e108-4cb9-b76d-825e33bdad41/volumes" Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.665888 4808 generic.go:334] "Generic (PLEG): container finished" podID="3d16d4be-1ab3-4261-97a7-054701cf9dba" containerID="9a7fc5641b68862f1d3e76b5ba9a8b27b392b25b5b1b1869bd5782ffe16d7cfb" exitCode=0 Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.665940 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" event={"ID":"3d16d4be-1ab3-4261-97a7-054701cf9dba","Type":"ContainerDied","Data":"9a7fc5641b68862f1d3e76b5ba9a8b27b392b25b5b1b1869bd5782ffe16d7cfb"} Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.666520 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" event={"ID":"3d16d4be-1ab3-4261-97a7-054701cf9dba","Type":"ContainerStarted","Data":"4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637"} Feb 17 16:19:00 crc kubenswrapper[4808]: I0217 16:19:00.678520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" event={"ID":"3d16d4be-1ab3-4261-97a7-054701cf9dba","Type":"ContainerStarted","Data":"016ca0b56ec9c54e7a9608d389c503625fd7451d943ef0dd7f826bf37802c0bf"} Feb 17 16:19:00 crc kubenswrapper[4808]: I0217 16:19:00.678972 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:19:00 crc kubenswrapper[4808]: I0217 16:19:00.716403 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" podStartSLOduration=3.716376855 podStartE2EDuration="3.716376855s" podCreationTimestamp="2026-02-17 16:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:00.701803606 +0000 UTC m=+1504.218162689" watchObservedRunningTime="2026-02-17 16:19:00.716376855 +0000 UTC m=+1504.232735948" Feb 17 16:19:02 crc kubenswrapper[4808]: E0217 16:19:02.148907 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:08 crc kubenswrapper[4808]: E0217 16:19:08.148171 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.333708 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.398064 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.398388 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns" containerID="cri-o://d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1" gracePeriod=10 Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.778696 4808 generic.go:334] "Generic (PLEG): container finished" podID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerID="d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1" exitCode=0 Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.778767 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerDied","Data":"d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1"} Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.002763 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050785 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050835 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050900 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050922 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.051070 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.051142 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.070000 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g" (OuterVolumeSpecName: "kube-api-access-lhb5g") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "kube-api-access-lhb5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.119274 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.125996 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.132995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.134985 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.142866 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.146920 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config" (OuterVolumeSpecName: "config") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152434 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152461 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152471 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152483 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152492 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152500 4808 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152510 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.797927 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerDied","Data":"c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df"} Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.798004 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.798024 4808 scope.go:117] "RemoveContainer" containerID="d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.834428 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.844232 4808 scope.go:117] "RemoveContainer" containerID="20dc982f9bc098e9d7e98d8a7978009b4306c29975504eb93ecc3923345a7b57" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.849324 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:19:11 crc kubenswrapper[4808]: I0217 16:19:11.159462 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" path="/var/lib/kubelet/pods/409792c8-f6ab-44df-a8d8-8c08bc58ed30/volumes" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.245267 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.245992 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.246152 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.247427 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.923381 4808 generic.go:334] "Generic (PLEG): container finished" podID="357e5513-bef7-45cc-b62f-072a161ccce3" containerID="5ca487733509062335b917cabbb5c95c9c9189e5d3adc4142b7ced90b7a9fc87" exitCode=0 Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.923959 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerDied","Data":"5ca487733509062335b917cabbb5c95c9c9189e5d3adc4142b7ced90b7a9fc87"} Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.931626 4808 generic.go:334] "Generic (PLEG): container finished" podID="9da8d67e-00c6-4ba1-a08b-09c5653d93fd" containerID="ae77a46583c3e8204d183609b0e2514ca4873bf349237e9718653cb5859c2857" exitCode=0 Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.931681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerDied","Data":"ae77a46583c3e8204d183609b0e2514ca4873bf349237e9718653cb5859c2857"} Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.943839 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerStarted","Data":"904f6f9146b129e8fb603f170c4eb5fe656441b9f59b4dd19f9f8151ed9b9506"} Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.944265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.945452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerStarted","Data":"7100610f263d6b00c7051e727dbccb6f0db8d39cdc23ff03e93b119fa0586576"} Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.945644 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.967625 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.967599825 podStartE2EDuration="36.967599825s" podCreationTimestamp="2026-02-17 16:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:20.96294338 +0000 UTC m=+1524.479302443" watchObservedRunningTime="2026-02-17 16:19:20.967599825 +0000 UTC m=+1524.483958918" Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.984947 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.984899916 podStartE2EDuration="36.984899916s" podCreationTimestamp="2026-02-17 16:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:20.983289573 +0000 UTC m=+1524.499648666" watchObservedRunningTime="2026-02-17 16:19:20.984899916 +0000 UTC m=+1524.501258989" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.592713 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.592981 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.744220 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"]
Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.744876 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="init"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.744953 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="init"
Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.745336 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745439 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns"
Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.745532 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="dnsmasq-dns"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745600 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="dnsmasq-dns"
Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.745661 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="init"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745714 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="init"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745962 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="dnsmasq-dns"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.746023 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns"
Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.746762 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.748353 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.748814 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.748987 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.750093 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.791599 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"] Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.821350 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.821615 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.821865 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.822079 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924623 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924686 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924743 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.934414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.934538 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.934879 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.955192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:22 crc kubenswrapper[4808]: I0217 16:19:22.066826 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.270185 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.270525 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.270711 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.271828 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:19:22 crc kubenswrapper[4808]: I0217 16:19:22.660097 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"]
Feb 17 16:19:22 crc kubenswrapper[4808]: W0217 16:19:22.660388 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod785a49f6_7a06_4787_a829_fc9956730c15.slice/crio-7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37 WatchSource:0}: Error finding container 7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37: Status 404 returned error can't find the container with id 7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37
Feb 17 16:19:22 crc kubenswrapper[4808]: I0217 16:19:22.963562 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerStarted","Data":"7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37"}
Feb 17 16:19:29 crc kubenswrapper[4808]: E0217 16:19:29.147074 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:19:34 crc kubenswrapper[4808]: I0217 16:19:34.892900 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 17 16:19:35 crc kubenswrapper[4808]: I0217 16:19:35.165171 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:19:36 crc kubenswrapper[4808]: E0217 16:19:36.152324 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:19:36 crc kubenswrapper[4808]: I0217 16:19:36.163054 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerStarted","Data":"3b8a8a2382ccfbae19a06c099cc5a82f7309486b57a54008ca868209da2f44e5"}
Feb 17 16:19:36 crc kubenswrapper[4808]: I0217 16:19:36.229774 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" podStartSLOduration=3.10474968 podStartE2EDuration="15.229748312s" podCreationTimestamp="2026-02-17 16:19:21 +0000 UTC" firstStartedPulling="2026-02-17 16:19:22.662692368 +0000 UTC m=+1526.179051441" lastFinishedPulling="2026-02-17 16:19:34.78769096 +0000 UTC m=+1538.304050073" observedRunningTime="2026-02-17 16:19:36.204329544 +0000 UTC m=+1539.720688627" watchObservedRunningTime="2026-02-17 16:19:36.229748312 +0000 UTC m=+1539.746107395"
Feb 17 16:19:41 crc kubenswrapper[4808]: E0217 16:19:41.150091 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:19:48 crc kubenswrapper[4808]: I0217 16:19:48.320778 4808 generic.go:334] "Generic (PLEG): container finished" podID="785a49f6-7a06-4787-a829-fc9956730c15" containerID="3b8a8a2382ccfbae19a06c099cc5a82f7309486b57a54008ca868209da2f44e5" exitCode=0
Feb 17 16:19:48 crc kubenswrapper[4808]: I0217 16:19:48.320866 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerDied","Data":"3b8a8a2382ccfbae19a06c099cc5a82f7309486b57a54008ca868209da2f44e5"}
Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.025509 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102301 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102418 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.108442 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.110251 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm" (OuterVolumeSpecName: "kube-api-access-nwxgm") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "kube-api-access-nwxgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.132506 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory" (OuterVolumeSpecName: "inventory") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.155340 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.205215 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.205987 4808 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.206000 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.206011 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.348440 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerDied","Data":"7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37"} Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.348485 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.348507 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.469732 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq"] Feb 17 16:19:50 crc kubenswrapper[4808]: E0217 16:19:50.470524 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785a49f6-7a06-4787-a829-fc9956730c15" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.470545 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="785a49f6-7a06-4787-a829-fc9956730c15" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.470830 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="785a49f6-7a06-4787-a829-fc9956730c15" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.471761 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.474564 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.474882 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.475296 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.475391 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.484366 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq"] Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.512609 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.512718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.512748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.614628 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.614678 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.614792 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.623902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.628009 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.633997 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.822528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:51 crc kubenswrapper[4808]: E0217 16:19:51.148761 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.380695 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq"] Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592006 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592082 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592128 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592912 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592975 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" gracePeriod=600 Feb 17 16:19:51 crc kubenswrapper[4808]: E0217 16:19:51.719315 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.374058 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" exitCode=0 Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.374101 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"} Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.374543 4808 scope.go:117] "RemoveContainer" containerID="34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49" Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.375426 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:19:52 crc kubenswrapper[4808]: E0217 16:19:52.376137 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.376817 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerStarted","Data":"85c281cb387270bbfc86bf45957a2a330927a4e4a3dc86d981d5d1496be3a77c"} Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.376853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerStarted","Data":"3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac"} Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.409950 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" podStartSLOduration=1.735716215 podStartE2EDuration="2.409935537s" podCreationTimestamp="2026-02-17 16:19:50 +0000 UTC" firstStartedPulling="2026-02-17 16:19:51.38102161 +0000 UTC m=+1554.897380703" lastFinishedPulling="2026-02-17 16:19:52.055240952 +0000 UTC 
m=+1555.571600025" observedRunningTime="2026-02-17 16:19:52.406781523 +0000 UTC m=+1555.923140586" watchObservedRunningTime="2026-02-17 16:19:52.409935537 +0000 UTC m=+1555.926294600" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.275812 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.276255 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.276404 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.277707 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:55 crc kubenswrapper[4808]: I0217 16:19:55.414027 4808 generic.go:334] "Generic (PLEG): container finished" podID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerID="85c281cb387270bbfc86bf45957a2a330927a4e4a3dc86d981d5d1496be3a77c" exitCode=0 Feb 17 16:19:55 crc kubenswrapper[4808]: I0217 16:19:55.414080 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerDied","Data":"85c281cb387270bbfc86bf45957a2a330927a4e4a3dc86d981d5d1496be3a77c"} Feb 17 16:19:56 crc kubenswrapper[4808]: I0217 16:19:56.998411 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.086658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.086725 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.086978 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.092235 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl" (OuterVolumeSpecName: "kube-api-access-gttpl") pod "404291d9-a172-4a9a-8a0e-2f2514ce06ff" (UID: "404291d9-a172-4a9a-8a0e-2f2514ce06ff"). InnerVolumeSpecName "kube-api-access-gttpl". 
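The pull failures above follow the standard kubelet cycle: CRI-O cannot resolve the image tag (quay.rdoproject.org reports that current-tested was deleted or expired), the sync attempt fails with ErrImagePull, and subsequent retries are throttled as ImagePullBackOff until the tag reappears or the pod spec changes. A hedged sketch of how one might enumerate pods stuck in this state with client-go follows; the List/ContainerStatuses calls are the standard Kubernetes API, but the kubeconfig path is a placeholder, and reason strings are compared literally as they appear in pod status.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point this at the cluster in question.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("openstack").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Report containers waiting on an image pull, as ceilometer-0 and
	// cloudkitty-db-sync-zl7nk are in the log above.
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil &&
				(w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container %s: %s: %s\n",
					pod.Namespace, pod.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}
```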
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.121268 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory" (OuterVolumeSpecName: "inventory") pod "404291d9-a172-4a9a-8a0e-2f2514ce06ff" (UID: "404291d9-a172-4a9a-8a0e-2f2514ce06ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.123062 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "404291d9-a172-4a9a-8a0e-2f2514ce06ff" (UID: "404291d9-a172-4a9a-8a0e-2f2514ce06ff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.189457 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.189749 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.189892 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.439244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerDied","Data":"3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac"} Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.439286 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.439351 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.523320 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"] Feb 17 16:19:57 crc kubenswrapper[4808]: E0217 16:19:57.525213 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.525240 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.525488 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.527263 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.529205 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.530053 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.530343 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.530629 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.543844 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"] Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597234 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597428 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597720 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699642 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.704341 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.705451 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.710108 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.723238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.867872 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:58 crc kubenswrapper[4808]: I0217 16:19:58.468545 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"] Feb 17 16:19:58 crc kubenswrapper[4808]: W0217 16:19:58.479120 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a30af7_342e_49c0_8e89_c38f11b7cc63.slice/crio-ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863 WatchSource:0}: Error finding container ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863: Status 404 returned error can't find the container with id ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863 Feb 17 16:19:59 crc kubenswrapper[4808]: I0217 16:19:59.462689 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerStarted","Data":"71c91d6451b64c7f7e3bd20b7f8ce8d6da0a6dbf093d38be3cac5d1529528868"} Feb 17 16:19:59 crc kubenswrapper[4808]: I0217 16:19:59.463011 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerStarted","Data":"ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863"} Feb 17 16:19:59 crc kubenswrapper[4808]: I0217 16:19:59.487790 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" podStartSLOduration=2.09250356 podStartE2EDuration="2.487762067s" podCreationTimestamp="2026-02-17 16:19:57 +0000 UTC" firstStartedPulling="2026-02-17 16:19:58.486982552 +0000 UTC m=+1562.003341625" lastFinishedPulling="2026-02-17 16:19:58.882241059 +0000 UTC m=+1562.398600132" observedRunningTime="2026-02-17 16:19:59.480854503 +0000 UTC m=+1562.997213576" watchObservedRunningTime="2026-02-17 16:19:59.487762067 +0000 UTC m=+1563.004121150" Feb 17 16:20:02 crc kubenswrapper[4808]: E0217 16:20:02.149096 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:20:03 crc kubenswrapper[4808]: I0217 16:20:03.145868 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:20:03 crc kubenswrapper[4808]: E0217 16:20:03.146376 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:20:06 crc kubenswrapper[4808]: E0217 16:20:06.187403 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:07 crc kubenswrapper[4808]: E0217 16:20:07.154718 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.578339 4808 scope.go:117] "RemoveContainer" containerID="393504cd886f25701edec85a116ae5e2c966bd8cc6f3213385ba9edc2a2c6ec3" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.625348 4808 scope.go:117] "RemoveContainer" containerID="7aea08d602941315a47910cfb8dca2a1ac4425726486c35b99c77739c12a5b14" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.687921 4808 scope.go:117] "RemoveContainer" containerID="b60fbde46c6075a50ace4cd1663669a692d98861f29087030c80fceb181a0f6f" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.732478 4808 scope.go:117] "RemoveContainer" containerID="aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.774761 4808 scope.go:117] "RemoveContainer" containerID="8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18" Feb 17 16:20:15 crc kubenswrapper[4808]: I0217 16:20:15.146451 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.146917 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.246972 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.247306 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.247490 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.248771 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
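The recurring "back-off 5m0s restarting failed container" messages for machine-config-daemon are the kubelet's crash-loop back-off: each failed restart lengthens the wait before the next attempt, up to the 5m cap the message quotes. The sketch below is a simplified model of that behavior, assuming a 10s base delay doubled per crash with a 5m ceiling; the exact base and reset rules are an assumption here, not taken from this log, which only shows the cap.

```go
package main

import (
	"fmt"
	"time"
)

// backoffDelay models the restart delay after a given number of consecutive
// crashes: an assumed 10s base, doubled each time, capped at the 5m that the
// CrashLoopBackOff messages above report.
func backoffDelay(restarts int) time.Duration {
	delay := 10 * time.Second
	for i := 0; i < restarts; i++ {
		delay *= 2
		if delay >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return delay
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("crash %d -> wait %v before next restart\n", r, backoffDelay(r))
	}
}
```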
Feb 17 16:20:16 crc kubenswrapper[4808]: E0217 16:20:16.443833 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]"
Feb 17 16:20:19 crc kubenswrapper[4808]: E0217 16:20:19.147869 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:20:26 crc kubenswrapper[4808]: E0217 16:20:26.777123 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]"
Feb 17 16:20:29 crc kubenswrapper[4808]: I0217 16:20:29.146684 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:20:29 crc kubenswrapper[4808]: E0217 16:20:29.147696 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:20:29 crc kubenswrapper[4808]: E0217 16:20:29.149203 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:20:32 crc kubenswrapper[4808]: E0217 16:20:32.150685 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:20:37 crc kubenswrapper[4808]: E0217 16:20:37.003967 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]"
Feb 17 16:20:40 crc kubenswrapper[4808]: I0217 16:20:40.146005 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:20:40 crc kubenswrapper[4808]: E0217 16:20:40.147641 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:20:43 crc kubenswrapper[4808]: E0217 16:20:43.149292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:20:44 crc kubenswrapper[4808]: E0217 16:20:44.148294 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:20:47 crc kubenswrapper[4808]: E0217 16:20:47.304827 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache]"
Feb 17 16:20:54 crc kubenswrapper[4808]: I0217 16:20:54.145674 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:20:54 crc kubenswrapper[4808]: E0217 16:20:54.146428 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:20:56 crc kubenswrapper[4808]: E0217 16:20:56.148937 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:20:58 crc kubenswrapper[4808]: E0217 16:20:58.147954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.019060 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"]
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.036920 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.074888 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"]
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.217330 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.217678 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.217749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.320063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.320183 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.320320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn"
Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.321131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn"
pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.321425 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.349368 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.362677 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.894623 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:02 crc kubenswrapper[4808]: W0217 16:21:02.896499 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4e6a34f_a3c5_453d_a8e0_244c279aa68f.slice/crio-58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da WatchSource:0}: Error finding container 58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da: Status 404 returned error can't find the container with id 58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.646718 4808 generic.go:334] "Generic (PLEG): container finished" podID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" exitCode=0 Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.647083 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f"} Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.647119 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerStarted","Data":"58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da"} Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.650812 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:21:04 crc kubenswrapper[4808]: I0217 16:21:04.660214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerStarted","Data":"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a"} Feb 17 16:21:05 crc kubenswrapper[4808]: I0217 16:21:05.681260 4808 generic.go:334] "Generic (PLEG): container finished" podID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" exitCode=0 Feb 17 16:21:05 crc kubenswrapper[4808]: I0217 16:21:05.681374 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" 
event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a"} Feb 17 16:21:07 crc kubenswrapper[4808]: I0217 16:21:07.704446 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerStarted","Data":"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797"} Feb 17 16:21:07 crc kubenswrapper[4808]: I0217 16:21:07.722050 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7kpkn" podStartSLOduration=3.2642923059999998 podStartE2EDuration="6.722031284s" podCreationTimestamp="2026-02-17 16:21:01 +0000 UTC" firstStartedPulling="2026-02-17 16:21:03.648849112 +0000 UTC m=+1627.165208195" lastFinishedPulling="2026-02-17 16:21:07.1065881 +0000 UTC m=+1630.622947173" observedRunningTime="2026-02-17 16:21:07.719492036 +0000 UTC m=+1631.235851129" watchObservedRunningTime="2026-02-17 16:21:07.722031284 +0000 UTC m=+1631.238390357" Feb 17 16:21:08 crc kubenswrapper[4808]: E0217 16:21:08.148420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:21:09 crc kubenswrapper[4808]: I0217 16:21:09.146499 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:21:09 crc kubenswrapper[4808]: E0217 16:21:09.147060 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:21:11 crc kubenswrapper[4808]: E0217 16:21:11.149285 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.363209 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.363641 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.425329 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.859993 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.915421 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:14 crc 
kubenswrapper[4808]: I0217 16:21:14.831672 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7kpkn" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server" containerID="cri-o://bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" gracePeriod=2 Feb 17 16:21:14 crc kubenswrapper[4808]: I0217 16:21:14.920621 4808 scope.go:117] "RemoveContainer" containerID="256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d" Feb 17 16:21:14 crc kubenswrapper[4808]: I0217 16:21:14.964943 4808 scope.go:117] "RemoveContainer" containerID="a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.449655 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.513283 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.513405 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.513511 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.514802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities" (OuterVolumeSpecName: "utilities") pod "c4e6a34f-a3c5-453d-a8e0-244c279aa68f" (UID: "c4e6a34f-a3c5-453d-a8e0-244c279aa68f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.522868 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc" (OuterVolumeSpecName: "kube-api-access-mh9qc") pod "c4e6a34f-a3c5-453d-a8e0-244c279aa68f" (UID: "c4e6a34f-a3c5-453d-a8e0-244c279aa68f"). InnerVolumeSpecName "kube-api-access-mh9qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.564169 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4e6a34f-a3c5-453d-a8e0-244c279aa68f" (UID: "c4e6a34f-a3c5-453d-a8e0-244c279aa68f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.616786 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.616823 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.616836 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849502 4808 generic.go:334] "Generic (PLEG): container finished" podID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" exitCode=0 Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797"} Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849595 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da"} Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849616 4808 scope.go:117] "RemoveContainer" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849759 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.884301 4808 scope.go:117] "RemoveContainer" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.900804 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.913704 4808 scope.go:117] "RemoveContainer" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.914698 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.950839 4808 scope.go:117] "RemoveContainer" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" Feb 17 16:21:15 crc kubenswrapper[4808]: E0217 16:21:15.951381 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797\": container with ID starting with bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797 not found: ID does not exist" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.951513 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797"} err="failed to get container status \"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797\": rpc error: code = NotFound desc = could not find container \"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797\": container with ID starting with bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797 not found: ID does not exist" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.951649 4808 scope.go:117] "RemoveContainer" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" Feb 17 16:21:15 crc kubenswrapper[4808]: E0217 16:21:15.952186 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a\": container with ID starting with 96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a not found: ID does not exist" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.952292 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a"} err="failed to get container status \"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a\": rpc error: code = NotFound desc = could not find container \"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a\": container with ID starting with 96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a not found: ID does not exist" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.952415 4808 scope.go:117] "RemoveContainer" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" Feb 17 16:21:15 crc kubenswrapper[4808]: E0217 16:21:15.952778 4808 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f\": container with ID starting with 1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f not found: ID does not exist" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.952903 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f"} err="failed to get container status \"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f\": rpc error: code = NotFound desc = could not find container \"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f\": container with ID starting with 1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f not found: ID does not exist" Feb 17 16:21:17 crc kubenswrapper[4808]: I0217 16:21:17.158358 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" path="/var/lib/kubelet/pods/c4e6a34f-a3c5-453d-a8e0-244c279aa68f/volumes" Feb 17 16:21:19 crc kubenswrapper[4808]: E0217 16:21:19.149791 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.273751 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.274129 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.274316 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.275599 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:23 crc kubenswrapper[4808]: I0217 16:21:23.146202 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:21:23 crc kubenswrapper[4808]: E0217 16:21:23.146855 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:21:32 crc kubenswrapper[4808]: E0217 16:21:32.148126 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:21:35 crc kubenswrapper[4808]: E0217 16:21:35.148047 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:38 crc kubenswrapper[4808]: I0217 16:21:38.146056 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:21:38 crc kubenswrapper[4808]: E0217 16:21:38.146792 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.149196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.270154 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.270439 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.270824 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.272251 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:21:51 crc kubenswrapper[4808]: I0217 16:21:51.146200 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:21:51 crc kubenswrapper[4808]: E0217 16:21:51.147640 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:22:01 crc kubenswrapper[4808]: E0217 16:22:01.150732 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:22:01 crc kubenswrapper[4808]: E0217 16:22:01.150769 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:22:02 crc kubenswrapper[4808]: I0217 16:22:02.145892 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:22:02 crc kubenswrapper[4808]: E0217 16:22:02.146258 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:22:14 crc kubenswrapper[4808]: E0217 16:22:14.149431 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:22:14 crc kubenswrapper[4808]: E0217 16:22:14.149472 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:22:16 crc kubenswrapper[4808]: I0217 16:22:16.146975 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:22:16 crc kubenswrapper[4808]: E0217 16:22:16.147424 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:22:25 crc kubenswrapper[4808]: E0217 16:22:25.148831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:22:28 crc kubenswrapper[4808]: I0217 16:22:28.147442 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:22:28 crc kubenswrapper[4808]: E0217 16:22:28.148016 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:22:28 crc kubenswrapper[4808]: E0217 16:22:28.148230 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:22:38 crc kubenswrapper[4808]: E0217 16:22:38.149055 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:22:40 crc kubenswrapper[4808]: I0217 16:22:40.146460 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:22:40 crc kubenswrapper[4808]: E0217 16:22:40.146858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:22:43 crc kubenswrapper[4808]: E0217 16:22:43.149546 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:22:51 crc kubenswrapper[4808]: I0217 16:22:51.145696 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:22:51 crc kubenswrapper[4808]: E0217 16:22:51.146549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:22:53 crc kubenswrapper[4808]: E0217 16:22:53.149687 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:22:56 crc kubenswrapper[4808]: I0217 16:22:56.020807 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerDied","Data":"71c91d6451b64c7f7e3bd20b7f8ce8d6da0a6dbf093d38be3cac5d1529528868"} Feb 17 16:22:56 crc kubenswrapper[4808]: I0217 16:22:56.020961 4808 generic.go:334] "Generic (PLEG): container finished" podID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerID="71c91d6451b64c7f7e3bd20b7f8ce8d6da0a6dbf093d38be3cac5d1529528868" exitCode=0 Feb 17 16:22:57 crc kubenswrapper[4808]: E0217 16:22:57.157297 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.541198 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655465 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655681 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655723 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.660669 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2" (OuterVolumeSpecName: "kube-api-access-9dsr2") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "kube-api-access-9dsr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.663676 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.684322 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.689029 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory" (OuterVolumeSpecName: "inventory") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757547 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757598 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757610 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757621 4808 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.046944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerDied","Data":"ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863"} Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.046983 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.047012 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.178793 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"] Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179202 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179215 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179238 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179245 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server" Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179265 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-content" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179271 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-content" Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179287 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-utilities" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179294 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-utilities" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179466 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179480 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.180152 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183397 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183679 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183843 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183952 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.190536 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"] Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.265872 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.266313 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.266503 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.368189 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.368278 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.368328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfxv\" (UniqueName: 
\"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.374135 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.386334 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.387169 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.536870 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:22:59 crc kubenswrapper[4808]: I0217 16:22:59.132121 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"] Feb 17 16:23:00 crc kubenswrapper[4808]: I0217 16:23:00.074873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerStarted","Data":"92e6ef387cf41dd71a851ea483493cf05b8666e2889e1132cbfb6ad483176127"} Feb 17 16:23:00 crc kubenswrapper[4808]: I0217 16:23:00.075395 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerStarted","Data":"b7f31d0387d770241189aacd0771c827ab5a7b271e4e7dcc1efa78c199758ae8"} Feb 17 16:23:00 crc kubenswrapper[4808]: I0217 16:23:00.099792 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" podStartSLOduration=1.5139489990000001 podStartE2EDuration="2.099762739s" podCreationTimestamp="2026-02-17 16:22:58 +0000 UTC" firstStartedPulling="2026-02-17 16:22:59.124341464 +0000 UTC m=+1742.640700547" lastFinishedPulling="2026-02-17 16:22:59.710155204 +0000 UTC m=+1743.226514287" observedRunningTime="2026-02-17 16:23:00.089954983 +0000 UTC m=+1743.606314106" watchObservedRunningTime="2026-02-17 16:23:00.099762739 +0000 UTC m=+1743.616121892" Feb 17 16:23:03 crc kubenswrapper[4808]: I0217 16:23:03.150743 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 
16:23:03 crc kubenswrapper[4808]: E0217 16:23:03.157660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:23:07 crc kubenswrapper[4808]: E0217 16:23:07.158425 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:23:11 crc kubenswrapper[4808]: E0217 16:23:11.149743 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.155516 4808 scope.go:117] "RemoveContainer" containerID="8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90" Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.193842 4808 scope.go:117] "RemoveContainer" containerID="8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8" Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.232863 4808 scope.go:117] "RemoveContainer" containerID="b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5" Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.265947 4808 scope.go:117] "RemoveContainer" containerID="8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf" Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.303048 4808 scope.go:117] "RemoveContainer" containerID="d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe" Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.340205 4808 scope.go:117] "RemoveContainer" containerID="14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90" Feb 17 16:23:18 crc kubenswrapper[4808]: I0217 16:23:18.145875 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:23:18 crc kubenswrapper[4808]: E0217 16:23:18.146793 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:23:22 crc kubenswrapper[4808]: E0217 16:23:22.149656 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:23:22 crc kubenswrapper[4808]: E0217 16:23:22.149656 4808 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:23:30 crc kubenswrapper[4808]: I0217 16:23:30.146927 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:23:30 crc kubenswrapper[4808]: E0217 16:23:30.149899 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:23:33 crc kubenswrapper[4808]: E0217 16:23:33.149313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:23:37 crc kubenswrapper[4808]: E0217 16:23:37.174701 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.063312 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6mgt5"] Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.079998 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6mgt5"] Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.097554 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"] Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.112083 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-mp9g8"] Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.124332 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"] Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.138339 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mp9g8"] Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.174318 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" path="/var/lib/kubelet/pods/56341195-0325-4b22-ba76-8f792fbbcdb6/volumes" Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.176774 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" path="/var/lib/kubelet/pods/7419b027-2686-4ba4-9459-30a4362d34f0/volumes" Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.179020 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"] Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.179072 4808 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"] Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.054122 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cw2fg"] Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.067605 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cw2fg"] Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.076671 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"] Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.084943 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"] Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.145318 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:23:42 crc kubenswrapper[4808]: E0217 16:23:42.145720 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.167534 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" path="/var/lib/kubelet/pods/58e700c8-ab25-47a2-a6cf-e85ffcb57e74/volumes" Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.169542 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" path="/var/lib/kubelet/pods/850baae5-89be-441f-85e0-f2f0ec68bdc3/volumes" Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.171678 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" path="/var/lib/kubelet/pods/850d66dd-e985-408b-93a0-8251cfd8dbc5/volumes" Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.172895 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" path="/var/lib/kubelet/pods/dbacbd93-bbc0-4360-bc45-9782988bd3c0/volumes" Feb 17 16:23:45 crc kubenswrapper[4808]: E0217 16:23:45.147729 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:23:51 crc kubenswrapper[4808]: E0217 16:23:51.148398 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:23:56 crc kubenswrapper[4808]: I0217 16:23:56.146413 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:23:56 crc kubenswrapper[4808]: E0217 16:23:56.147602 4808 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:00 crc kubenswrapper[4808]: E0217 16:24:00.147598 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:06 crc kubenswrapper[4808]: E0217 16:24:06.148941 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:24:07 crc kubenswrapper[4808]: I0217 16:24:07.050942 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f2jqv"] Feb 17 16:24:07 crc kubenswrapper[4808]: I0217 16:24:07.066288 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f2jqv"] Feb 17 16:24:07 crc kubenswrapper[4808]: I0217 16:24:07.163394 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7377369f-b540-4b85-be05-4200c9695a41" path="/var/lib/kubelet/pods/7377369f-b540-4b85-be05-4200c9695a41/volumes" Feb 17 16:24:09 crc kubenswrapper[4808]: I0217 16:24:09.146197 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:24:09 crc kubenswrapper[4808]: E0217 16:24:09.146840 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.065755 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.093128 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ktddg"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.105964 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jqrq2"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.115279 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.125222 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.137219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jmq6n"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.167590 4808 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.167833 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.177892 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.196078 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jqrq2"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.208122 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ktddg"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.217117 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.226733 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.240976 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.249204 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jmq6n"] Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.256899 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"] Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.282910 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.283512 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.283830 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.285227 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.163227 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" path="/var/lib/kubelet/pods/02478fdd-380d-42f9-b105-c3ae86d224a8/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.164827 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" path="/var/lib/kubelet/pods/2495c4d6-8174-4b4d-9114-968620fbba31/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.165995 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" path="/var/lib/kubelet/pods/3ccecd7d-0e59-4336-a6ec-a595adbb727e/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.167095 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" path="/var/lib/kubelet/pods/72e328d4-94e9-42bc-ae1c-b07b01d80072/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.169426 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" path="/var/lib/kubelet/pods/c02cbd83-d077-4812-b852-7fe9a0182b71/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.171026 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" path="/var/lib/kubelet/pods/e183e901-16a0-43cf-9ce5-ef36da8686d1/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.172713 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" path="/var/lib/kubelet/pods/e5180ea6-12c0-4463-8fe5-c35ab2a15b44/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.174953 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" path="/var/lib/kubelet/pods/ff670244-5344-4409-9823-6bfcf9ed274d/volumes" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.446668 4808 scope.go:117] "RemoveContainer" containerID="2e2ee0ccc758be665530168176318d177d82ba65213912cccc942306aee57326" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.502222 4808 scope.go:117] "RemoveContainer" containerID="468b053d64c80baec6de3b54c4b2f477a89ae15f7b2f83e72b93e7a2a09b7e47" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.585859 4808 scope.go:117] "RemoveContainer" containerID="77cbcade43f0ae77b54c73845bcb62b81d16918f6513db83061d64f348ec9b2b" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.620142 4808 scope.go:117] "RemoveContainer" containerID="20f7389fa9f51fba5453c2a234db420d7d9f90654863c47b866a9ae0d75fd9b5" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.675220 4808 scope.go:117] "RemoveContainer" containerID="f07d48d83b8d167312f75dfe2e3617926d4c7c6a17b68b60f025f9a0615ec6aa" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.718271 4808 scope.go:117] "RemoveContainer" containerID="8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.769674 4808 scope.go:117] "RemoveContainer" containerID="b727a664b9c0061ba9f01801dd0228679fbc0026b1e712729a3b0f80c6eddfb3" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.795759 4808 scope.go:117] "RemoveContainer" containerID="2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66" 
Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.821847 4808 scope.go:117] "RemoveContainer" containerID="d6c0e57ec0c9fe5da75d2c778f8867455af3d9bb73146a28181bca20e679417d" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.842615 4808 scope.go:117] "RemoveContainer" containerID="b9a6e75c4872c463e0bee7ea278256a76575233d65a1cb8980723a4259e57365" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.865896 4808 scope.go:117] "RemoveContainer" containerID="c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.888039 4808 scope.go:117] "RemoveContainer" containerID="92a52a548321e7e91228a92677db66adc649f3fd4be4a1f0b2dcb81c8ce95063" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.912883 4808 scope.go:117] "RemoveContainer" containerID="313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.935874 4808 scope.go:117] "RemoveContainer" containerID="56b80ac7ee378fc8d9b7164abf8b6f6b4c7155149d6206a5a9c6aa08286e5594" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.955152 4808 scope.go:117] "RemoveContainer" containerID="ebb5009c36b8fd7590317bf3c492f0defedfa61fc35e3d839e79e88a3e507747" Feb 17 16:24:18 crc kubenswrapper[4808]: I0217 16:24:18.039110 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kzjns"] Feb 17 16:24:18 crc kubenswrapper[4808]: I0217 16:24:18.053450 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kzjns"] Feb 17 16:24:19 crc kubenswrapper[4808]: E0217 16:24:19.149909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:24:19 crc kubenswrapper[4808]: I0217 16:24:19.166378 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" path="/var/lib/kubelet/pods/41c68bd6-6280-4a89-be87-4d65f06a5a4d/volumes" Feb 17 16:24:23 crc kubenswrapper[4808]: I0217 16:24:23.146272 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:24:23 crc kubenswrapper[4808]: E0217 16:24:23.148754 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:27 crc kubenswrapper[4808]: E0217 16:24:27.161773 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.283665 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.284370 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.284565 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.286011 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:24:37 crc kubenswrapper[4808]: I0217 16:24:37.158142 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:24:37 crc kubenswrapper[4808]: E0217 16:24:37.159152 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:40 crc kubenswrapper[4808]: E0217 16:24:40.150287 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.065142 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.082657 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.092421 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.104122 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:24:45 crc kubenswrapper[4808]: E0217 16:24:45.149296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.169118 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436b0400-6c82-450b-9505-61bf124b5db5" path="/var/lib/kubelet/pods/436b0400-6c82-450b-9505-61bf124b5db5/volumes" Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.170324 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" path="/var/lib/kubelet/pods/e4002815-8dd4-4668-bea7-0d54bdaa4dd6/volumes" Feb 17 
16:24:50 crc kubenswrapper[4808]: I0217 16:24:50.146063 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:24:50 crc kubenswrapper[4808]: E0217 16:24:50.146678 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:51 crc kubenswrapper[4808]: E0217 16:24:51.147809 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:59 crc kubenswrapper[4808]: E0217 16:24:59.149212 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:04 crc kubenswrapper[4808]: E0217 16:25:04.148814 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:05 crc kubenswrapper[4808]: I0217 16:25:05.146768 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:25:05 crc kubenswrapper[4808]: I0217 16:25:05.675215 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f"} Feb 17 16:25:06 crc kubenswrapper[4808]: I0217 16:25:06.029426 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:25:06 crc kubenswrapper[4808]: I0217 16:25:06.038188 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:25:07 crc kubenswrapper[4808]: I0217 16:25:07.177854 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" path="/var/lib/kubelet/pods/bb977bed-804c-4e4c-8d35-5562015024f3/volumes" Feb 17 16:25:08 crc kubenswrapper[4808]: I0217 16:25:08.056576 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:25:08 crc kubenswrapper[4808]: I0217 16:25:08.068057 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:25:09 crc kubenswrapper[4808]: I0217 16:25:09.168891 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7820c3c-fe38-46dd-906a-498a579d0805" path="/var/lib/kubelet/pods/b7820c3c-fe38-46dd-906a-498a579d0805/volumes" Feb 
17 16:25:13 crc kubenswrapper[4808]: E0217 16:25:13.149729 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:14 crc kubenswrapper[4808]: I0217 16:25:14.038365 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:25:14 crc kubenswrapper[4808]: I0217 16:25:14.049939 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.039165 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.071757 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.166631 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" path="/var/lib/kubelet/pods/5bf4d932-664a-46c6-bec5-f2b70950c824/volumes" Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.167355 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" path="/var/lib/kubelet/pods/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026/volumes" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.286633 4808 scope.go:117] "RemoveContainer" containerID="be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.337412 4808 scope.go:117] "RemoveContainer" containerID="f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.416051 4808 scope.go:117] "RemoveContainer" containerID="d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.461815 4808 scope.go:117] "RemoveContainer" containerID="f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.533519 4808 scope.go:117] "RemoveContainer" containerID="605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.576273 4808 scope.go:117] "RemoveContainer" containerID="8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.613996 4808 scope.go:117] "RemoveContainer" containerID="1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c" Feb 17 16:25:18 crc kubenswrapper[4808]: E0217 16:25:18.147560 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:24 crc kubenswrapper[4808]: E0217 16:25:24.149162 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:28 crc kubenswrapper[4808]: I0217 16:25:28.048588 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:25:28 crc kubenswrapper[4808]: I0217 16:25:28.065511 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:25:29 crc kubenswrapper[4808]: I0217 16:25:29.173772 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" path="/var/lib/kubelet/pods/cf7344d6-b8f4-4234-bb75-f4d7702b040b/volumes" Feb 17 16:25:30 crc kubenswrapper[4808]: E0217 16:25:30.148857 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:36 crc kubenswrapper[4808]: E0217 16:25:36.149100 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:43 crc kubenswrapper[4808]: E0217 16:25:43.154339 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:51 crc kubenswrapper[4808]: E0217 16:25:51.148946 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:54 crc kubenswrapper[4808]: E0217 16:25:54.149427 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.197624 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.201473 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.217187 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.394887 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.394989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.395026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.496901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.497205 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.497355 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.497696 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.497844 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.527904 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.585938 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.048566 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.064853 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.073170 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.085704 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.093448 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.101834 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.111111 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.123302 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.133018 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.160079 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.183252 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.308236 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerStarted","Data":"3afeb434bab0fb0da9b13eaddc7fb873f72a9bd6ff9080844b251c54195e62ad"} Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.032671 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tmj75"] Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.045230 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tmj75"] Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.165921 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" path="/var/lib/kubelet/pods/785bc852-9af8-4d44-9c07-a7b501efb72c/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.166492 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" path="/var/lib/kubelet/pods/84bc7003-1a29-41b6-af75-956706dd0efe/volumes" Feb 17 16:25:57 crc 
kubenswrapper[4808]: I0217 16:25:57.167033 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" path="/var/lib/kubelet/pods/adb98158-8a64-4a24-9d8a-5c7308881c79/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.168668 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" path="/var/lib/kubelet/pods/b6543f3f-c70d-4258-b1f3-b74458b60153/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.169748 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" path="/var/lib/kubelet/pods/bad0fdf2-2880-4568-87b0-6319f864c348/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.170259 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" path="/var/lib/kubelet/pods/c6cd1abe-7b23-494f-b22f-b355f5937f82/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.317147 4808 generic.go:334] "Generic (PLEG): container finished" podID="071adac8-52ce-4703-a685-252d450e9c18" containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" exitCode=0 Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.317216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0"} Feb 17 16:25:58 crc kubenswrapper[4808]: I0217 16:25:58.326503 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerStarted","Data":"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2"} Feb 17 16:26:02 crc kubenswrapper[4808]: I0217 16:26:02.383083 4808 generic.go:334] "Generic (PLEG): container finished" podID="071adac8-52ce-4703-a685-252d450e9c18" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" exitCode=0 Feb 17 16:26:02 crc kubenswrapper[4808]: I0217 16:26:02.383157 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2"} Feb 17 16:26:03 crc kubenswrapper[4808]: E0217 16:26:03.146860 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:03 crc kubenswrapper[4808]: I0217 16:26:03.395094 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerStarted","Data":"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d"} Feb 17 16:26:03 crc kubenswrapper[4808]: I0217 16:26:03.429951 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlq8w" podStartSLOduration=2.983338063 podStartE2EDuration="8.429921445s" podCreationTimestamp="2026-02-17 16:25:55 +0000 UTC" firstStartedPulling="2026-02-17 16:25:57.319717206 
+0000 UTC m=+1920.836076279" lastFinishedPulling="2026-02-17 16:26:02.766300578 +0000 UTC m=+1926.282659661" observedRunningTime="2026-02-17 16:26:03.415355585 +0000 UTC m=+1926.931714698" watchObservedRunningTime="2026-02-17 16:26:03.429921445 +0000 UTC m=+1926.946280548" Feb 17 16:26:05 crc kubenswrapper[4808]: I0217 16:26:05.587057 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:05 crc kubenswrapper[4808]: I0217 16:26:05.588650 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:05 crc kubenswrapper[4808]: I0217 16:26:05.648523 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:08 crc kubenswrapper[4808]: E0217 16:26:08.149010 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:15 crc kubenswrapper[4808]: E0217 16:26:15.150171 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:15 crc kubenswrapper[4808]: I0217 16:26:15.672837 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:15 crc kubenswrapper[4808]: I0217 16:26:15.745185 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.536853 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlq8w" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="registry-server" containerID="cri-o://c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" gracePeriod=2 Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.807968 4808 scope.go:117] "RemoveContainer" containerID="202121dae9bdf398a0c42e540c49f3bde76321b020f7cab3e7250c352d974480" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.877491 4808 scope.go:117] "RemoveContainer" containerID="8a03cfda6ba1482551fb43a88bb0d456e3e357369b1e584649fa69312e5fe7ab" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.915693 4808 scope.go:117] "RemoveContainer" containerID="51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.951708 4808 scope.go:117] "RemoveContainer" containerID="24b6cca39f7f0539540e703e695312278dead1c9fbed89b92d1978c2b31592d9" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.997448 4808 scope.go:117] "RemoveContainer" containerID="0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.061164 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.080194 4808 scope.go:117] "RemoveContainer" containerID="75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.123197 4808 scope.go:117] "RemoveContainer" containerID="4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.239837 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"071adac8-52ce-4703-a685-252d450e9c18\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.239982 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"071adac8-52ce-4703-a685-252d450e9c18\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.240137 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"071adac8-52ce-4703-a685-252d450e9c18\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.241063 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities" (OuterVolumeSpecName: "utilities") pod "071adac8-52ce-4703-a685-252d450e9c18" (UID: "071adac8-52ce-4703-a685-252d450e9c18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.245131 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv" (OuterVolumeSpecName: "kube-api-access-8rbsv") pod "071adac8-52ce-4703-a685-252d450e9c18" (UID: "071adac8-52ce-4703-a685-252d450e9c18"). InnerVolumeSpecName "kube-api-access-8rbsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.289977 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "071adac8-52ce-4703-a685-252d450e9c18" (UID: "071adac8-52ce-4703-a685-252d450e9c18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.342497 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.342530 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.342544 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.553318 4808 generic.go:334] "Generic (PLEG): container finished" podID="071adac8-52ce-4703-a685-252d450e9c18" containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" exitCode=0 Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.553491 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d"} Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.555124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"3afeb434bab0fb0da9b13eaddc7fb873f72a9bd6ff9080844b251c54195e62ad"} Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.553664 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.555164 4808 scope.go:117] "RemoveContainer" containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.589261 4808 scope.go:117] "RemoveContainer" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.613629 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.617644 4808 scope.go:117] "RemoveContainer" containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.624731 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.675383 4808 scope.go:117] "RemoveContainer" containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" Feb 17 16:26:17 crc kubenswrapper[4808]: E0217 16:26:17.690766 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d\": container with ID starting with c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d not found: ID does not exist" containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.691076 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d"} err="failed to get container status \"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d\": rpc error: code = NotFound desc = could not find container \"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d\": container with ID starting with c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d not found: ID does not exist" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.691232 4808 scope.go:117] "RemoveContainer" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" Feb 17 16:26:17 crc kubenswrapper[4808]: E0217 16:26:17.697758 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2\": container with ID starting with 249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2 not found: ID does not exist" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.698059 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2"} err="failed to get container status \"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2\": rpc error: code = NotFound desc = could not find container \"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2\": container with ID starting with 249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2 not found: ID does not exist" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.698214 4808 scope.go:117] "RemoveContainer" 
containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" Feb 17 16:26:17 crc kubenswrapper[4808]: E0217 16:26:17.700932 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0\": container with ID starting with de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0 not found: ID does not exist" containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.701080 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0"} err="failed to get container status \"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0\": rpc error: code = NotFound desc = could not find container \"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0\": container with ID starting with de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0 not found: ID does not exist" Feb 17 16:26:19 crc kubenswrapper[4808]: I0217 16:26:19.161673 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071adac8-52ce-4703-a685-252d450e9c18" path="/var/lib/kubelet/pods/071adac8-52ce-4703-a685-252d450e9c18/volumes" Feb 17 16:26:22 crc kubenswrapper[4808]: E0217 16:26:22.148103 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:28 crc kubenswrapper[4808]: I0217 16:26:28.064148 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:26:28 crc kubenswrapper[4808]: I0217 16:26:28.079784 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:26:29 crc kubenswrapper[4808]: I0217 16:26:29.160491 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" path="/var/lib/kubelet/pods/a276997e-b8ab-4b5a-ac5f-c21a8114d673/volumes" Feb 17 16:26:30 crc kubenswrapper[4808]: E0217 16:26:30.147517 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:33 crc kubenswrapper[4808]: E0217 16:26:33.150631 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:44 crc kubenswrapper[4808]: E0217 16:26:44.150035 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:44 crc kubenswrapper[4808]: E0217 16:26:44.150097 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:55 crc kubenswrapper[4808]: E0217 16:26:55.149684 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:58 crc kubenswrapper[4808]: E0217 16:26:58.148323 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.051454 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.066457 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.081413 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.089446 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.159239 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3864d41e-915e-4b73-908e-c575d38863e9" path="/var/lib/kubelet/pods/3864d41e-915e-4b73-908e-c575d38863e9/volumes" Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.160359 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" path="/var/lib/kubelet/pods/8d64831b-aec0-42cd-96ec-831ec911d921/volumes" Feb 17 16:27:09 crc kubenswrapper[4808]: E0217 16:27:09.148901 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:13 crc kubenswrapper[4808]: E0217 16:27:13.148000 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:17 crc kubenswrapper[4808]: I0217 16:27:17.297721 4808 scope.go:117] "RemoveContainer" containerID="c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc" Feb 17 16:27:17 crc kubenswrapper[4808]: I0217 16:27:17.344687 4808 
scope.go:117] "RemoveContainer" containerID="03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f" Feb 17 16:27:17 crc kubenswrapper[4808]: I0217 16:27:17.414894 4808 scope.go:117] "RemoveContainer" containerID="531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096" Feb 17 16:27:20 crc kubenswrapper[4808]: E0217 16:27:20.150141 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:21 crc kubenswrapper[4808]: I0217 16:27:21.592778 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:27:21 crc kubenswrapper[4808]: I0217 16:27:21.592860 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:27:25 crc kubenswrapper[4808]: E0217 16:27:25.148179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:32 crc kubenswrapper[4808]: E0217 16:27:32.148973 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:38 crc kubenswrapper[4808]: E0217 16:27:38.149670 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:41 crc kubenswrapper[4808]: I0217 16:27:41.779072 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 17 16:27:44 crc kubenswrapper[4808]: E0217 16:27:44.148731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:51 crc kubenswrapper[4808]: I0217 16:27:51.592049 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:27:51 crc kubenswrapper[4808]: I0217 16:27:51.592718 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:27:52 crc kubenswrapper[4808]: I0217 16:27:52.052052 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"] Feb 17 16:27:52 crc kubenswrapper[4808]: I0217 16:27:52.060923 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"] Feb 17 16:27:52 crc kubenswrapper[4808]: E0217 16:27:52.148564 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:53 crc kubenswrapper[4808]: I0217 16:27:53.159321 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" path="/var/lib/kubelet/pods/9a26947f-ccdc-4726-98dc-a0c08a2a198b/volumes" Feb 17 16:27:56 crc kubenswrapper[4808]: E0217 16:27:56.149402 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:06 crc kubenswrapper[4808]: E0217 16:28:06.147563 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:09 crc kubenswrapper[4808]: E0217 16:28:09.151412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.879000 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:13 crc kubenswrapper[4808]: E0217 16:28:13.880749 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-utilities" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.880782 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-utilities" Feb 17 16:28:13 crc kubenswrapper[4808]: E0217 16:28:13.880848 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071adac8-52ce-4703-a685-252d450e9c18" 
containerName="registry-server" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.880866 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="registry-server" Feb 17 16:28:13 crc kubenswrapper[4808]: E0217 16:28:13.880915 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-content" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.880938 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-content" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.881484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="registry-server" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.885336 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.910635 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.938153 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.938234 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.938334 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040108 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040464 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040559 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " 
pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040787 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040983 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.062235 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.228817 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.685223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.868269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerStarted","Data":"59f85d534f1c5d5a0ca9234081d8cdc8974975ca244768bed00c00b344466112"} Feb 17 16:28:15 crc kubenswrapper[4808]: I0217 16:28:15.879731 4808 generic.go:334] "Generic (PLEG): container finished" podID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" exitCode=0 Feb 17 16:28:15 crc kubenswrapper[4808]: I0217 16:28:15.879904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b"} Feb 17 16:28:15 crc kubenswrapper[4808]: I0217 16:28:15.882004 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:28:16 crc kubenswrapper[4808]: I0217 16:28:16.896567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerStarted","Data":"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844"} Feb 17 16:28:17 crc kubenswrapper[4808]: I0217 16:28:17.561318 4808 scope.go:117] "RemoveContainer" containerID="af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79" Feb 17 16:28:18 crc kubenswrapper[4808]: E0217 16:28:18.148442 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:20 crc 
kubenswrapper[4808]: E0217 16:28:20.148570 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.592997 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.593091 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.593161 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.594482 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.594658 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f" gracePeriod=600 Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.955889 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f" exitCode=0 Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.955988 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f"} Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.956273 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.959593 4808 generic.go:334] "Generic (PLEG): container finished" podID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" exitCode=0 Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.959637 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844"} Feb 17 16:28:22 crc kubenswrapper[4808]: 
I0217 16:28:22.975223 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerStarted","Data":"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb"} Feb 17 16:28:22 crc kubenswrapper[4808]: I0217 16:28:22.978061 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"} Feb 17 16:28:23 crc kubenswrapper[4808]: I0217 16:28:23.018144 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hq4vv" podStartSLOduration=3.467866383 podStartE2EDuration="10.018124021s" podCreationTimestamp="2026-02-17 16:28:13 +0000 UTC" firstStartedPulling="2026-02-17 16:28:15.881532706 +0000 UTC m=+2059.397891799" lastFinishedPulling="2026-02-17 16:28:22.431790364 +0000 UTC m=+2065.948149437" observedRunningTime="2026-02-17 16:28:22.999109314 +0000 UTC m=+2066.515468397" watchObservedRunningTime="2026-02-17 16:28:23.018124021 +0000 UTC m=+2066.534483094" Feb 17 16:28:24 crc kubenswrapper[4808]: I0217 16:28:24.229886 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:24 crc kubenswrapper[4808]: I0217 16:28:24.230390 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:25 crc kubenswrapper[4808]: I0217 16:28:25.282379 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq4vv" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:25 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:25 crc kubenswrapper[4808]: > Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.027808 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.030801 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.042551 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.136102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.136161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.136225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.238508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.239943 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.239996 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.240917 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.241491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.268752 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.363393 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.953110 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:28 crc kubenswrapper[4808]: I0217 16:28:28.029428 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerStarted","Data":"68f649476e38cbc82b4ba982f39c632fb19bbdf3c243d2c8025176af812aea53"} Feb 17 16:28:29 crc kubenswrapper[4808]: I0217 16:28:29.044554 4808 generic.go:334] "Generic (PLEG): container finished" podID="12171d1b-4dea-4358-89cd-ba25b219f753" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" exitCode=0 Feb 17 16:28:29 crc kubenswrapper[4808]: I0217 16:28:29.044634 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a"} Feb 17 16:28:30 crc kubenswrapper[4808]: I0217 16:28:30.056269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerStarted","Data":"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c"} Feb 17 16:28:30 crc kubenswrapper[4808]: E0217 16:28:30.147634 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:31 crc kubenswrapper[4808]: E0217 16:28:31.148332 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:32 crc kubenswrapper[4808]: I0217 16:28:32.079160 4808 generic.go:334] "Generic (PLEG): container finished" podID="12171d1b-4dea-4358-89cd-ba25b219f753" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" exitCode=0 Feb 17 16:28:32 crc kubenswrapper[4808]: I0217 16:28:32.079270 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c"} Feb 17 16:28:33 crc kubenswrapper[4808]: I0217 16:28:33.092069 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" 
event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerStarted","Data":"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3"} Feb 17 16:28:33 crc kubenswrapper[4808]: I0217 16:28:33.118427 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4zchg" podStartSLOduration=2.662289438 podStartE2EDuration="6.118410233s" podCreationTimestamp="2026-02-17 16:28:27 +0000 UTC" firstStartedPulling="2026-02-17 16:28:29.04767989 +0000 UTC m=+2072.564038963" lastFinishedPulling="2026-02-17 16:28:32.503800685 +0000 UTC m=+2076.020159758" observedRunningTime="2026-02-17 16:28:33.113891573 +0000 UTC m=+2076.630250686" watchObservedRunningTime="2026-02-17 16:28:33.118410233 +0000 UTC m=+2076.634769316" Feb 17 16:28:35 crc kubenswrapper[4808]: I0217 16:28:35.296506 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq4vv" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:35 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:35 crc kubenswrapper[4808]: > Feb 17 16:28:37 crc kubenswrapper[4808]: I0217 16:28:37.363540 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:37 crc kubenswrapper[4808]: I0217 16:28:37.363838 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:37 crc kubenswrapper[4808]: I0217 16:28:37.427116 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:38 crc kubenswrapper[4808]: I0217 16:28:38.202331 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:38 crc kubenswrapper[4808]: I0217 16:28:38.270955 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.173342 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4zchg" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" containerID="cri-o://63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" gracePeriod=2 Feb 17 16:28:40 crc kubenswrapper[4808]: E0217 16:28:40.421783 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12171d1b_4dea_4358_89cd_ba25b219f753.slice/crio-63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12171d1b_4dea_4358_89cd_ba25b219f753.slice/crio-conmon-63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.788919 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.881490 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"12171d1b-4dea-4358-89cd-ba25b219f753\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.881797 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod \"12171d1b-4dea-4358-89cd-ba25b219f753\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.881851 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"12171d1b-4dea-4358-89cd-ba25b219f753\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.882728 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities" (OuterVolumeSpecName: "utilities") pod "12171d1b-4dea-4358-89cd-ba25b219f753" (UID: "12171d1b-4dea-4358-89cd-ba25b219f753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.887594 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd" (OuterVolumeSpecName: "kube-api-access-k69hd") pod "12171d1b-4dea-4358-89cd-ba25b219f753" (UID: "12171d1b-4dea-4358-89cd-ba25b219f753"). InnerVolumeSpecName "kube-api-access-k69hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.934076 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12171d1b-4dea-4358-89cd-ba25b219f753" (UID: "12171d1b-4dea-4358-89cd-ba25b219f753"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.984911 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.984943 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.984953 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.190347 4808 generic.go:334] "Generic (PLEG): container finished" podID="12171d1b-4dea-4358-89cd-ba25b219f753" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" exitCode=0 Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.190454 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.190489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3"} Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.191660 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"68f649476e38cbc82b4ba982f39c632fb19bbdf3c243d2c8025176af812aea53"} Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.191701 4808 scope.go:117] "RemoveContainer" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.227852 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.231768 4808 scope.go:117] "RemoveContainer" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.237748 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.257720 4808 scope.go:117] "RemoveContainer" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.332655 4808 scope.go:117] "RemoveContainer" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" Feb 17 16:28:41 crc kubenswrapper[4808]: E0217 16:28:41.333114 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3\": container with ID starting with 63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3 not found: ID does not exist" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.333159 
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3"} err="failed to get container status \"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3\": rpc error: code = NotFound desc = could not find container \"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3\": container with ID starting with 63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3 not found: ID does not exist" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.333193 4808 scope.go:117] "RemoveContainer" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" Feb 17 16:28:41 crc kubenswrapper[4808]: E0217 16:28:41.333860 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c\": container with ID starting with 01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c not found: ID does not exist" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.333995 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c"} err="failed to get container status \"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c\": rpc error: code = NotFound desc = could not find container \"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c\": container with ID starting with 01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c not found: ID does not exist" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.334102 4808 scope.go:117] "RemoveContainer" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" Feb 17 16:28:41 crc kubenswrapper[4808]: E0217 16:28:41.334718 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a\": container with ID starting with eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a not found: ID does not exist" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.334811 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a"} err="failed to get container status \"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a\": rpc error: code = NotFound desc = could not find container \"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a\": container with ID starting with eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a not found: ID does not exist" Feb 17 16:28:43 crc kubenswrapper[4808]: I0217 16:28:43.159201 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" path="/var/lib/kubelet/pods/12171d1b-4dea-4358-89cd-ba25b219f753/volumes" Feb 17 16:28:44 crc kubenswrapper[4808]: I0217 16:28:44.294420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:44 crc kubenswrapper[4808]: I0217 16:28:44.353929 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:44 crc kubenswrapper[4808]: I0217 16:28:44.612808 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:45 crc kubenswrapper[4808]: E0217 16:28:45.151683 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:45 crc kubenswrapper[4808]: E0217 16:28:45.151702 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.237494 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hq4vv" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" containerID="cri-o://d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" gracePeriod=2 Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.786643 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.897202 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.897378 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.897471 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.898688 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities" (OuterVolumeSpecName: "utilities") pod "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" (UID: "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.915874 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n" (OuterVolumeSpecName: "kube-api-access-fpv9n") pod "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" (UID: "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29"). InnerVolumeSpecName "kube-api-access-fpv9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.999436 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.999465 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.016992 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" (UID: "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.101638 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250282 4808 generic.go:334] "Generic (PLEG): container finished" podID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" exitCode=0 Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250331 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb"} Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"59f85d534f1c5d5a0ca9234081d8cdc8974975ca244768bed00c00b344466112"} Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250371 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250385 4808 scope.go:117] "RemoveContainer" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.288941 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.294271 4808 scope.go:117] "RemoveContainer" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.300504 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.327823 4808 scope.go:117] "RemoveContainer" containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.383105 4808 scope.go:117] "RemoveContainer" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" Feb 17 16:28:47 crc kubenswrapper[4808]: E0217 16:28:47.384221 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb\": container with ID starting with d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb not found: ID does not exist" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384290 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb"} err="failed to get container status \"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb\": rpc error: code = NotFound desc = could not find container \"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb\": container with ID starting with d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb not found: ID does not exist" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384332 4808 scope.go:117] "RemoveContainer" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" Feb 17 16:28:47 crc kubenswrapper[4808]: E0217 16:28:47.384714 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844\": container with ID starting with 2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844 not found: ID does not exist" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384761 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844"} err="failed to get container status \"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844\": rpc error: code = NotFound desc = could not find container \"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844\": container with ID starting with 2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844 not found: ID does not exist" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384797 4808 scope.go:117] "RemoveContainer" 
containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" Feb 17 16:28:47 crc kubenswrapper[4808]: E0217 16:28:47.385136 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b\": container with ID starting with 64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b not found: ID does not exist" containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.385172 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b"} err="failed to get container status \"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b\": rpc error: code = NotFound desc = could not find container \"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b\": container with ID starting with 64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b not found: ID does not exist" Feb 17 16:28:49 crc kubenswrapper[4808]: I0217 16:28:49.168751 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" path="/var/lib/kubelet/pods/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29/volumes" Feb 17 16:28:56 crc kubenswrapper[4808]: E0217 16:28:56.148211 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:56 crc kubenswrapper[4808]: E0217 16:28:56.148402 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:07 crc kubenswrapper[4808]: E0217 16:29:07.178484 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:08 crc kubenswrapper[4808]: E0217 16:29:08.147827 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.288877 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.289450 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.289616 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.290849 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:19 crc kubenswrapper[4808]: E0217 16:29:19.146682 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:32 crc kubenswrapper[4808]: E0217 16:29:32.149377 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:34 crc kubenswrapper[4808]: E0217 16:29:34.147809 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:43 crc kubenswrapper[4808]: E0217 16:29:43.148240 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.278999 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.279902 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.280122 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.281484 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:51 crc kubenswrapper[4808]: I0217 16:29:51.566683 4808 generic.go:334] "Generic (PLEG): container finished" podID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerID="92e6ef387cf41dd71a851ea483493cf05b8666e2889e1132cbfb6ad483176127" exitCode=2 Feb 17 16:29:51 crc kubenswrapper[4808]: I0217 16:29:51.566784 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerDied","Data":"92e6ef387cf41dd71a851ea483493cf05b8666e2889e1132cbfb6ad483176127"} Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.137018 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.234124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.234322 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.234485 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.243772 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv" (OuterVolumeSpecName: "kube-api-access-kdfxv") pod "2084629b-ffd4-4f5e-8db7-070d4a08dd8e" (UID: "2084629b-ffd4-4f5e-8db7-070d4a08dd8e"). InnerVolumeSpecName "kube-api-access-kdfxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.267140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2084629b-ffd4-4f5e-8db7-070d4a08dd8e" (UID: "2084629b-ffd4-4f5e-8db7-070d4a08dd8e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.272444 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory" (OuterVolumeSpecName: "inventory") pod "2084629b-ffd4-4f5e-8db7-070d4a08dd8e" (UID: "2084629b-ffd4-4f5e-8db7-070d4a08dd8e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.338882 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.338914 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.338926 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.589456 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerDied","Data":"b7f31d0387d770241189aacd0771c827ab5a7b271e4e7dcc1efa78c199758ae8"} Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.589530 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f31d0387d770241189aacd0771c827ab5a7b271e4e7dcc1efa78c199758ae8" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.589614 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:29:54 crc kubenswrapper[4808]: E0217 16:29:54.147863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:57 crc kubenswrapper[4808]: E0217 16:29:57.161295 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.167954 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169279 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169303 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169333 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169345 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169365 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169377 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169413 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169452 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169484 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169496 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169519 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169532 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169553 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169565 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.187835 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.187953 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.187981 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.189220 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.193691 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.196770 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.212844 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.298427 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.298558 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.300317 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.402362 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.402431 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.402506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.403561 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod 
\"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.408995 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.424301 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.524920 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.040952 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"] Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.043213 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045544 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045599 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045830 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045871 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.057314 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"] Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.220829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.220980 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.221059 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.281413 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.322995 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.323425 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.324792 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.329996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.330366 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.342741 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.366183 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.775446 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerStarted","Data":"c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe"} Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.775831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerStarted","Data":"760de8bd8d09554dd73353da29e851042c810b009a003ac5e43d970dec207854"} Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.810023 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" podStartSLOduration=1.809996564 podStartE2EDuration="1.809996564s" podCreationTimestamp="2026-02-17 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:30:01.79555796 +0000 UTC m=+2165.311917043" watchObservedRunningTime="2026-02-17 16:30:01.809996564 +0000 UTC m=+2165.326355647" Feb 17 16:30:01 crc kubenswrapper[4808]: W0217 16:30:01.911556 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod486d1a55_6cee_4d24_ab2b_5c5c61c6d3d3.slice/crio-7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90 WatchSource:0}: Error finding container 7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90: Status 404 returned error can't find the container with id 7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90 Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.921343 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"] Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.784429 4808 generic.go:334] "Generic (PLEG): container finished" podID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerID="c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.784542 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerDied","Data":"c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe"} Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.786287 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerStarted","Data":"8411ed95197c32b6e4edaeead95a670ced65c70f3a3592064db86f9a1b81cf5a"} Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.786330 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerStarted","Data":"7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90"} Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.818241 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" podStartSLOduration=1.409963092 podStartE2EDuration="1.818223865s" podCreationTimestamp="2026-02-17 16:30:01 +0000 UTC" firstStartedPulling="2026-02-17 16:30:01.916905851 +0000 UTC m=+2165.433264924" lastFinishedPulling="2026-02-17 16:30:02.325166604 +0000 UTC m=+2165.841525697" observedRunningTime="2026-02-17 16:30:02.813684694 +0000 UTC m=+2166.330043777" watchObservedRunningTime="2026-02-17 16:30:02.818223865 +0000 UTC m=+2166.334582928" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.313403 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.407570 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.407878 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.408002 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.408491 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" (UID: "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.409129 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.413330 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" (UID: "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.415803 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw" (OuterVolumeSpecName: "kube-api-access-z9bvw") pod "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" (UID: "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf"). InnerVolumeSpecName "kube-api-access-z9bvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.510918 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.510952 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.814845 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerDied","Data":"760de8bd8d09554dd73353da29e851042c810b009a003ac5e43d970dec207854"} Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.814891 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.814916 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760de8bd8d09554dd73353da29e851042c810b009a003ac5e43d970dec207854" Feb 17 16:30:05 crc kubenswrapper[4808]: I0217 16:30:05.383486 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 16:30:05 crc kubenswrapper[4808]: I0217 16:30:05.390851 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 16:30:07 crc kubenswrapper[4808]: E0217 16:30:07.156887 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:07 crc kubenswrapper[4808]: I0217 16:30:07.173883 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" path="/var/lib/kubelet/pods/7baa3ebb-6bb0-4744-b096-971958bcd263/volumes" Feb 17 16:30:10 crc kubenswrapper[4808]: E0217 16:30:10.149018 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:17 crc kubenswrapper[4808]: I0217 16:30:17.728939 4808 scope.go:117] "RemoveContainer" containerID="4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44" Feb 17 16:30:19 crc kubenswrapper[4808]: E0217 16:30:19.149803 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:21 crc kubenswrapper[4808]: E0217 16:30:21.148431 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:32 crc kubenswrapper[4808]: E0217 16:30:32.147549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:36 crc kubenswrapper[4808]: E0217 16:30:36.149255 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:45 crc kubenswrapper[4808]: E0217 16:30:45.147471 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:51 crc kubenswrapper[4808]: E0217 16:30:51.149276 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:51 crc kubenswrapper[4808]: I0217 16:30:51.591858 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:30:51 crc kubenswrapper[4808]: I0217 16:30:51.591927 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:00 crc kubenswrapper[4808]: E0217 16:31:00.149010 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:02 crc kubenswrapper[4808]: E0217 16:31:02.147831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:12 crc kubenswrapper[4808]: E0217 
16:31:12.148846 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:14 crc kubenswrapper[4808]: E0217 16:31:14.148656 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:21 crc kubenswrapper[4808]: I0217 16:31:21.592622 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:31:21 crc kubenswrapper[4808]: I0217 16:31:21.593219 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:23 crc kubenswrapper[4808]: E0217 16:31:23.148434 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:26 crc kubenswrapper[4808]: E0217 16:31:26.150261 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:36 crc kubenswrapper[4808]: E0217 16:31:36.147648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:40 crc kubenswrapper[4808]: E0217 16:31:40.148213 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:51 crc kubenswrapper[4808]: E0217 16:31:51.148342 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.591677 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.591730 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.591805 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.592557 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.592631 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" gracePeriod=600 Feb 17 16:31:51 crc kubenswrapper[4808]: E0217 16:31:51.733145 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.966533 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" exitCode=0 Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.966570 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"} Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.966630 4808 scope.go:117] "RemoveContainer" containerID="ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.967244 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:31:51 crc kubenswrapper[4808]: E0217 16:31:51.967469 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:31:54 crc kubenswrapper[4808]: E0217 16:31:54.148007 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:03 crc kubenswrapper[4808]: I0217 16:32:03.146442 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:03 crc kubenswrapper[4808]: E0217 16:32:03.147873 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:05 crc kubenswrapper[4808]: E0217 16:32:05.148762 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:07 crc kubenswrapper[4808]: E0217 16:32:07.157964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:17 crc kubenswrapper[4808]: I0217 16:32:17.154097 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:17 crc kubenswrapper[4808]: E0217 16:32:17.155116 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:18 crc kubenswrapper[4808]: E0217 16:32:18.148843 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:19 crc kubenswrapper[4808]: E0217 16:32:19.147508 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:31 crc kubenswrapper[4808]: I0217 16:32:31.146481 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:31 crc kubenswrapper[4808]: E0217 16:32:31.147175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:33 crc kubenswrapper[4808]: E0217 16:32:33.149042 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:33 crc kubenswrapper[4808]: E0217 16:32:33.152091 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:43 crc kubenswrapper[4808]: I0217 16:32:43.148079 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:43 crc kubenswrapper[4808]: E0217 16:32:43.149406 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:45 crc kubenswrapper[4808]: E0217 16:32:45.150933 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:46 crc kubenswrapper[4808]: E0217 16:32:46.147771 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:56 crc kubenswrapper[4808]: E0217 16:32:56.148512 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:57 crc kubenswrapper[4808]: I0217 16:32:57.152157 4808 
scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:57 crc kubenswrapper[4808]: E0217 16:32:57.152429 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:01 crc kubenswrapper[4808]: E0217 16:33:01.148797 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:09 crc kubenswrapper[4808]: I0217 16:33:09.146060 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:09 crc kubenswrapper[4808]: E0217 16:33:09.147084 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:09 crc kubenswrapper[4808]: E0217 16:33:09.148842 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:15 crc kubenswrapper[4808]: E0217 16:33:15.151542 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:22 crc kubenswrapper[4808]: I0217 16:33:22.145833 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:22 crc kubenswrapper[4808]: E0217 16:33:22.147195 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:24 crc kubenswrapper[4808]: E0217 16:33:24.149383 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:26 crc kubenswrapper[4808]: E0217 16:33:26.148629 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:35 crc kubenswrapper[4808]: E0217 16:33:35.148237 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:36 crc kubenswrapper[4808]: I0217 16:33:36.146511 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:36 crc kubenswrapper[4808]: E0217 16:33:36.147107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:41 crc kubenswrapper[4808]: E0217 16:33:41.154095 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:48 crc kubenswrapper[4808]: I0217 16:33:48.146985 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:48 crc kubenswrapper[4808]: E0217 16:33:48.147900 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:49 crc kubenswrapper[4808]: E0217 16:33:49.147848 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:53 crc kubenswrapper[4808]: E0217 16:33:53.148240 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:00 crc kubenswrapper[4808]: I0217 16:34:00.146362 4808 scope.go:117] "RemoveContainer" 
containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:34:00 crc kubenswrapper[4808]: E0217 16:34:00.146990 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:34:03 crc kubenswrapper[4808]: E0217 16:34:03.147998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:34:04 crc kubenswrapper[4808]: E0217 16:34:04.147637 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:14 crc kubenswrapper[4808]: I0217 16:34:14.146390 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:34:14 crc kubenswrapper[4808]: E0217 16:34:14.147308 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:34:15 crc kubenswrapper[4808]: E0217 16:34:15.150951 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:18 crc kubenswrapper[4808]: E0217 16:34:18.149767 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:34:26 crc kubenswrapper[4808]: I0217 16:34:26.149237 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.274075 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.274466 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.274650 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
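The pull failures logged at 16:34:26 above give the root cause of the recurring back-off records: the current-tested tag has been removed from quay.rdoproject.org. A minimal sketch (not part of the log) for tallying these back-off records by image, assuming the journal has been exported to a plain-text file; the filename kubelet.log is a placeholder:

    # tally_backoffs.py - counts "Back-off pulling image" records per image.
    import re
    from collections import Counter

    # Matches the escaped quoting seen in the journal text above,
    # e.g. ... Back-off pulling image \\\"quay.rdoproject.org/...\\\" ...
    PULL_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)')

    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as f:
        for line in f:
            m = PULL_RE.search(line)
            if m:
                counts[m.group(1)] += 1

    for image, n in counts.most_common():
        print(f"{n:5d}  {image}")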
Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.276232 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:34:29 crc kubenswrapper[4808]: I0217 16:34:29.147199 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:34:29 crc kubenswrapper[4808]: E0217 16:34:29.147974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:34:32 crc kubenswrapper[4808]: E0217 16:34:32.147067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:34:38 crc kubenswrapper[4808]: E0217 16:34:38.148831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:34:41 crc kubenswrapper[4808]: I0217 16:34:41.145760 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:34:41 crc kubenswrapper[4808]: E0217 16:34:41.146459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:34:46 crc kubenswrapper[4808]: E0217 16:34:46.149705 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:34:49 crc kubenswrapper[4808]: E0217 16:34:49.149011 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:34:52 crc kubenswrapper[4808]: I0217 16:34:52.146336 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:34:52 crc kubenswrapper[4808]: E0217 16:34:52.146731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.149454 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.268160 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.268257 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.268422 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.269714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:03 crc kubenswrapper[4808]: I0217 16:35:03.145706 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:03 crc kubenswrapper[4808]: E0217 16:35:03.146437 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:13 crc kubenswrapper[4808]: E0217 16:35:13.148310 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:16 crc kubenswrapper[4808]: I0217 16:35:16.146540 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:16 crc kubenswrapper[4808]: E0217 16:35:16.147210 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:16 crc kubenswrapper[4808]: E0217 16:35:16.155435 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:35:26 crc kubenswrapper[4808]: E0217 16:35:26.149318 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:27 crc kubenswrapper[4808]: I0217 16:35:27.162266 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:27 crc kubenswrapper[4808]: E0217 16:35:27.164285 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:31 crc kubenswrapper[4808]: E0217 16:35:31.148107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:35:41 crc kubenswrapper[4808]: E0217 16:35:41.149668 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:42 crc kubenswrapper[4808]: I0217 16:35:42.146423 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:42 crc kubenswrapper[4808]: E0217 16:35:42.147107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:45 crc kubenswrapper[4808]: E0217 16:35:45.148812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:35:55 crc kubenswrapper[4808]: E0217 16:35:55.148477 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:57 crc kubenswrapper[4808]: I0217 16:35:57.152078 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:57 crc kubenswrapper[4808]: E0217 16:35:57.152693 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:58 crc kubenswrapper[4808]: E0217 16:35:58.150156 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:36:08 crc kubenswrapper[4808]: I0217 16:36:08.146294 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:36:08 crc kubenswrapper[4808]: E0217 16:36:08.147101 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
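After the journald prefix, each kubenswrapper record above follows the klog header layout: a severity letter (I/W/E/F), MMDD, wall-clock time with microseconds, a PID, the source file:line, then the message. A small sketch of splitting that header, assuming records shaped like the ones in this log:

    # klog_header.py - splits the klog header out of an exported journal line.
    import re

    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) '
        r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)'
    )

    line = ('Feb 17 16:36:08 crc kubenswrapper[4808]: E0217 16:36:08.147101 '
            '4808 pod_workers.go:1301] "Error syncing pod, skipping" ...')
    m = KLOG.search(line)
    if m:
        print(m.group("sev"), m.group("time"), m.group("src"))
        # -> E 16:36:08.147101 pod_workers.go:1301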
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:36:10 crc kubenswrapper[4808]: E0217 16:36:10.149348 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.350235 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6krv"] Feb 17 16:36:12 crc kubenswrapper[4808]: E0217 16:36:12.350954 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerName="collect-profiles" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.350969 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerName="collect-profiles" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.351215 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerName="collect-profiles" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.353260 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.363731 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"] Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.452538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.452629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.452683 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.554969 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc 
kubenswrapper[4808]: I0217 16:36:12.555092 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.555161 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.555542 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.555554 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.578414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.683442 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:13 crc kubenswrapper[4808]: E0217 16:36:13.161063 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.357973 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"] Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.841125 4808 generic.go:334] "Generic (PLEG): container finished" podID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerID="8411ed95197c32b6e4edaeead95a670ced65c70f3a3592064db86f9a1b81cf5a" exitCode=2 Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.841221 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerDied","Data":"8411ed95197c32b6e4edaeead95a670ced65c70f3a3592064db86f9a1b81cf5a"} Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.847135 4808 generic.go:334] "Generic (PLEG): container finished" podID="33cc2cac-9faa-4273-905f-128750f10c80" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd" exitCode=0 Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.847189 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"} Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.847220 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerStarted","Data":"2438c932894e0e169fd6358da543273050a3355916f7007c304f5b2829473875"} Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.398403 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.428101 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.428366 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.428395 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.469220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj" (OuterVolumeSpecName: "kube-api-access-cmcgj") pod "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" (UID: "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3"). InnerVolumeSpecName "kube-api-access-cmcgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.482773 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory" (OuterVolumeSpecName: "inventory") pod "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" (UID: "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.496091 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" (UID: "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.535313 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") on node \"crc\" DevicePath \"\"" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.535350 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.535361 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.869818 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerStarted","Data":"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"} Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.872023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerDied","Data":"7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90"} Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.872048 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90" Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.872092 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:36:16 crc kubenswrapper[4808]: I0217 16:36:16.887196 4808 generic.go:334] "Generic (PLEG): container finished" podID="33cc2cac-9faa-4273-905f-128750f10c80" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4" exitCode=0 Feb 17 16:36:16 crc kubenswrapper[4808]: I0217 16:36:16.887269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"} Feb 17 16:36:17 crc kubenswrapper[4808]: I0217 16:36:17.902868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerStarted","Data":"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"} Feb 17 16:36:17 crc kubenswrapper[4808]: I0217 16:36:17.923229 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6krv" podStartSLOduration=2.221646902 podStartE2EDuration="5.923204526s" podCreationTimestamp="2026-02-17 16:36:12 +0000 UTC" firstStartedPulling="2026-02-17 16:36:13.849877092 +0000 UTC m=+2537.366236165" lastFinishedPulling="2026-02-17 16:36:17.551434726 +0000 UTC m=+2541.067793789" observedRunningTime="2026-02-17 16:36:17.918971882 +0000 UTC m=+2541.435330955" watchObservedRunningTime="2026-02-17 16:36:17.923204526 +0000 UTC m=+2541.439563609" Feb 17 16:36:20 crc kubenswrapper[4808]: I0217 16:36:20.145691 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:36:20 crc kubenswrapper[4808]: E0217 16:36:20.146226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:36:22 crc kubenswrapper[4808]: E0217 16:36:22.147690 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:36:22 crc kubenswrapper[4808]: I0217 16:36:22.683682 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:22 crc kubenswrapper[4808]: I0217 16:36:22.684028 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:22 crc kubenswrapper[4808]: I0217 16:36:22.734858 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:23 crc kubenswrapper[4808]: I0217 16:36:23.006102 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6krv" Feb 17 16:36:23 crc kubenswrapper[4808]: I0217 16:36:23.063941 4808 
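The startup-latency record above is internally consistent: the two monotonic m=+ values bound the image pull, podStartE2EDuration equals watchObservedRunningTime (16:36:17.923204526) minus podCreationTimestamp (16:36:12), and podStartSLOduration equals the E2E duration minus the pull time, i.e. the SLO figure appears to exclude time spent pulling images. A short check (not from the log) using only numbers quoted in the record:

    # slo_check.py - re-derives the durations in the record above.
    pull_start = 2537.366236165   # firstStartedPulling, monotonic m=+ value
    pull_end   = 2541.067793789   # lastFinishedPulling, monotonic m=+ value
    e2e        = 5.923204526      # podStartE2EDuration as logged

    pulling = pull_end - pull_start
    print(f"image pull   {pulling:.9f}s")        # 3.701557624
    print(f"SLO duration {e2e - pulling:.9f}s")  # 2.221646902, matches podStartSLOduration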
Feb 17 16:36:24 crc kubenswrapper[4808]: E0217 16:36:24.148318 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:36:24 crc kubenswrapper[4808]: I0217 16:36:24.971976 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6krv" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server" containerID="cri-o://5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b" gracePeriod=2
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.580257 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.740801 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"33cc2cac-9faa-4273-905f-128750f10c80\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") "
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.741471 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"33cc2cac-9faa-4273-905f-128750f10c80\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") "
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.741743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"33cc2cac-9faa-4273-905f-128750f10c80\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") "
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.743243 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities" (OuterVolumeSpecName: "utilities") pod "33cc2cac-9faa-4273-905f-128750f10c80" (UID: "33cc2cac-9faa-4273-905f-128750f10c80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.747438 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9" (OuterVolumeSpecName: "kube-api-access-xwnt9") pod "33cc2cac-9faa-4273-905f-128750f10c80" (UID: "33cc2cac-9faa-4273-905f-128750f10c80"). InnerVolumeSpecName "kube-api-access-xwnt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.844412 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.844460 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.964667 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33cc2cac-9faa-4273-905f-128750f10c80" (UID: "33cc2cac-9faa-4273-905f-128750f10c80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988141 4808 generic.go:334] "Generic (PLEG): container finished" podID="33cc2cac-9faa-4273-905f-128750f10c80" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b" exitCode=0
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988196 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"}
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"2438c932894e0e169fd6358da543273050a3355916f7007c304f5b2829473875"}
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988251 4808 scope.go:117] "RemoveContainer" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988464 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.018873 4808 scope.go:117] "RemoveContainer" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.027413 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.036057 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.045465 4808 scope.go:117] "RemoveContainer" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.048829 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.089769 4808 scope.go:117] "RemoveContainer" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"
Feb 17 16:36:26 crc kubenswrapper[4808]: E0217 16:36:26.090282 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b\": container with ID starting with 5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b not found: ID does not exist" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.090335 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"} err="failed to get container status \"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b\": rpc error: code = NotFound desc = could not find container \"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b\": container with ID starting with 5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b not found: ID does not exist"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.090366 4808 scope.go:117] "RemoveContainer" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"
Feb 17 16:36:26 crc kubenswrapper[4808]: E0217 16:36:26.091144 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4\": container with ID starting with 077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4 not found: ID does not exist" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.091201 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"} err="failed to get container status \"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4\": rpc error: code = NotFound desc = could not find container \"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4\": container with ID starting with 077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4 not found: ID does not exist"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.091236 4808 scope.go:117] "RemoveContainer" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"
Feb 17 16:36:26 crc kubenswrapper[4808]: E0217 16:36:26.091715 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd\": container with ID starting with 6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd not found: ID does not exist" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.091761 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"} err="failed to get container status \"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd\": rpc error: code = NotFound desc = could not find container \"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd\": container with ID starting with 6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd not found: ID does not exist"
Feb 17 16:36:27 crc kubenswrapper[4808]: I0217 16:36:27.168228 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cc2cac-9faa-4273-905f-128750f10c80" path="/var/lib/kubelet/pods/33cc2cac-9faa-4273-905f-128750f10c80/volumes"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041075 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"]
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041851 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041865 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server"
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041879 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-content"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-content"
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041914 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-utilities"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041921 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-utilities"
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041933 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041940 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.042169 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
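The SyncLoop (PLEG) records above carry their event payload as literal JSON, so a pod's container lifecycle can be reconstructed directly from the journal. A sketch (not from the log), again assuming the journal is exported to a placeholder file kubelet.log:

    # pleg_timeline.py - groups PLEG container events by pod.
    import json
    import re
    from collections import defaultdict

    # Matches: ... pod="ns/name" event={"ID":"...","Type":"...","Data":"..."}
    EVENT = re.compile(r'pod="([^"]+)" event=(\{.*?\})')

    timeline = defaultdict(list)
    with open("kubelet.log", encoding="utf-8") as f:
        for line in f:
            m = EVENT.search(line)
            if m:
                pod, payload = m.group(1), json.loads(m.group(2))
                timeline[pod].append((payload["Type"], payload["Data"][:12]))

    for pod, events in timeline.items():
        print(pod)
        for etype, cid in events:
            print(f"  {etype:16s} {cid}")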
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.042196 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.043079 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.050261 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.050940 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.051299 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.056330 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.056558 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"] Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.110160 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.110236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.110271 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.146102 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.146442 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.211131 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.212086 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.212195 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.216438 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.216623 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.233225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.388284 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.939744 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"] Feb 17 16:36:34 crc kubenswrapper[4808]: I0217 16:36:34.073509 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerStarted","Data":"0bd0464d30a220d6d00def18b5261451af4eeafffd898c8b5ae55cfbfb63623f"} Feb 17 16:36:34 crc kubenswrapper[4808]: E0217 16:36:34.148736 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:36:35 crc kubenswrapper[4808]: I0217 16:36:35.084313 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerStarted","Data":"65dafe8a1101f4ddfb7e0bce9d223f707cac8bd45bd857f95672b3b349fe2857"} Feb 17 16:36:35 crc kubenswrapper[4808]: I0217 16:36:35.109970 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" podStartSLOduration=1.540102377 podStartE2EDuration="2.109953124s" podCreationTimestamp="2026-02-17 16:36:33 +0000 UTC" firstStartedPulling="2026-02-17 16:36:33.944832202 +0000 UTC m=+2557.461191275" lastFinishedPulling="2026-02-17 16:36:34.514682939 +0000 UTC m=+2558.031042022" observedRunningTime="2026-02-17 16:36:35.102287628 +0000 UTC m=+2558.618646701" watchObservedRunningTime="2026-02-17 16:36:35.109953124 +0000 UTC m=+2558.626312197" Feb 17 16:36:39 crc kubenswrapper[4808]: E0217 16:36:39.150113 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:36:47 crc kubenswrapper[4808]: E0217 16:36:47.158035 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:36:48 crc kubenswrapper[4808]: I0217 16:36:48.166263 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:36:48 crc kubenswrapper[4808]: E0217 16:36:48.166952 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:36:50 crc 
kubenswrapper[4808]: E0217 16:36:50.148717 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:00 crc kubenswrapper[4808]: I0217 16:37:00.851814 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:37:00 crc kubenswrapper[4808]: E0217 16:37:00.874119 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:01 crc kubenswrapper[4808]: I0217 16:37:01.892253 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d"} Feb 17 16:37:03 crc kubenswrapper[4808]: E0217 16:37:03.151431 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:12 crc kubenswrapper[4808]: E0217 16:37:12.149122 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:15 crc kubenswrapper[4808]: E0217 16:37:15.148812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:26 crc kubenswrapper[4808]: E0217 16:37:26.149073 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:29 crc kubenswrapper[4808]: E0217 16:37:29.151531 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:37 crc kubenswrapper[4808]: E0217 16:37:37.165966 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:42 crc kubenswrapper[4808]: E0217 16:37:42.149179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:51 crc kubenswrapper[4808]: E0217 16:37:51.151520 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:57 crc kubenswrapper[4808]: E0217 16:37:57.164420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:05 crc kubenswrapper[4808]: E0217 16:38:05.148996 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:11 crc kubenswrapper[4808]: E0217 16:38:11.150056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:16 crc kubenswrapper[4808]: E0217 16:38:16.149639 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:26 crc kubenswrapper[4808]: E0217 16:38:26.147962 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:31 crc kubenswrapper[4808]: E0217 16:38:31.149093 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:39 crc kubenswrapper[4808]: E0217 
16:38:39.149233 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.835790 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.844263 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.866976 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.930964 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.931460 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.931565 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.033984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.034090 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.034866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.034882 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"redhat-operators-q8vwn\" (UID: 
\"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.035075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.065134 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: E0217 16:38:44.147604 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.202376 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.665985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:38:45 crc kubenswrapper[4808]: I0217 16:38:45.058704 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" exitCode=0 Feb 17 16:38:45 crc kubenswrapper[4808]: I0217 16:38:45.058885 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7"} Feb 17 16:38:45 crc kubenswrapper[4808]: I0217 16:38:45.060040 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerStarted","Data":"66f492c858449812bc56a11568b4164d08498794c2af45b5127f3bbf69b58322"} Feb 17 16:38:46 crc kubenswrapper[4808]: I0217 16:38:46.075471 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerStarted","Data":"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496"} Feb 17 16:38:50 crc kubenswrapper[4808]: I0217 16:38:50.131840 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" exitCode=0 Feb 17 16:38:50 crc kubenswrapper[4808]: I0217 16:38:50.131918 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496"} Feb 17 16:38:51 crc kubenswrapper[4808]: I0217 16:38:51.142324 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerStarted","Data":"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e"} Feb 17 16:38:51 crc kubenswrapper[4808]: I0217 16:38:51.196536 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q8vwn" podStartSLOduration=2.711497022 podStartE2EDuration="8.196509523s" podCreationTimestamp="2026-02-17 16:38:43 +0000 UTC" firstStartedPulling="2026-02-17 16:38:45.060761199 +0000 UTC m=+2688.577120282" lastFinishedPulling="2026-02-17 16:38:50.54577369 +0000 UTC m=+2694.062132783" observedRunningTime="2026-02-17 16:38:51.173361513 +0000 UTC m=+2694.689720586" watchObservedRunningTime="2026-02-17 16:38:51.196509523 +0000 UTC m=+2694.712868606" Feb 17 16:38:53 crc kubenswrapper[4808]: E0217 16:38:53.148607 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:54 crc kubenswrapper[4808]: I0217 16:38:54.202956 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:54 crc kubenswrapper[4808]: I0217 16:38:54.203816 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:55 crc kubenswrapper[4808]: I0217 16:38:55.265078 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q8vwn" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" probeResult="failure" output=< Feb 17 16:38:55 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:38:55 crc kubenswrapper[4808]: > Feb 17 16:38:57 crc kubenswrapper[4808]: E0217 16:38:57.154879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:39:04 crc kubenswrapper[4808]: I0217 16:39:04.297885 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:04 crc kubenswrapper[4808]: I0217 16:39:04.378124 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:04 crc kubenswrapper[4808]: I0217 16:39:04.554741 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:39:06 crc kubenswrapper[4808]: E0217 16:39:06.148081 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:39:06 crc kubenswrapper[4808]: I0217 16:39:06.317350 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-q8vwn" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" containerID="cri-o://4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" gracePeriod=2 Feb 17 16:39:06 crc kubenswrapper[4808]: I0217 16:39:06.939028 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.101884 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"5a88dad2-a141-4d84-85d2-e8b97defad8b\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.101933 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"5a88dad2-a141-4d84-85d2-e8b97defad8b\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.101994 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"5a88dad2-a141-4d84-85d2-e8b97defad8b\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.103125 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities" (OuterVolumeSpecName: "utilities") pod "5a88dad2-a141-4d84-85d2-e8b97defad8b" (UID: "5a88dad2-a141-4d84-85d2-e8b97defad8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.108813 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb" (OuterVolumeSpecName: "kube-api-access-6v7rb") pod "5a88dad2-a141-4d84-85d2-e8b97defad8b" (UID: "5a88dad2-a141-4d84-85d2-e8b97defad8b"). InnerVolumeSpecName "kube-api-access-6v7rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.203997 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.204030 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.236033 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a88dad2-a141-4d84-85d2-e8b97defad8b" (UID: "5a88dad2-a141-4d84-85d2-e8b97defad8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.305506 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327263 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" exitCode=0 Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e"} Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"66f492c858449812bc56a11568b4164d08498794c2af45b5127f3bbf69b58322"} Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327334 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327348 4808 scope.go:117] "RemoveContainer" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.360555 4808 scope.go:117] "RemoveContainer" containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.372280 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.381729 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.399335 4808 scope.go:117] "RemoveContainer" containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.448313 4808 scope.go:117] "RemoveContainer" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" Feb 17 16:39:07 crc kubenswrapper[4808]: E0217 16:39:07.448794 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e\": container with ID starting with 4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e not found: ID does not exist" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.448823 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e"} err="failed to get container status \"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e\": rpc error: code = NotFound desc = could not find container \"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e\": container with ID starting with 4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e not found: ID does not exist" Feb 17 16:39:07 crc 
kubenswrapper[4808]: I0217 16:39:07.448845 4808 scope.go:117] "RemoveContainer" containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" Feb 17 16:39:07 crc kubenswrapper[4808]: E0217 16:39:07.449248 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496\": container with ID starting with c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496 not found: ID does not exist" containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.449273 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496"} err="failed to get container status \"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496\": rpc error: code = NotFound desc = could not find container \"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496\": container with ID starting with c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496 not found: ID does not exist" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.449287 4808 scope.go:117] "RemoveContainer" containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" Feb 17 16:39:07 crc kubenswrapper[4808]: E0217 16:39:07.449596 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7\": container with ID starting with 7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7 not found: ID does not exist" containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.449640 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7"} err="failed to get container status \"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7\": rpc error: code = NotFound desc = could not find container \"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7\": container with ID starting with 7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7 not found: ID does not exist" Feb 17 16:39:09 crc kubenswrapper[4808]: I0217 16:39:09.157910 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" path="/var/lib/kubelet/pods/5a88dad2-a141-4d84-85d2-e8b97defad8b/volumes" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.964447 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:10 crc kubenswrapper[4808]: E0217 16:39:10.965618 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-content" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.965649 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-content" Feb 17 16:39:10 crc kubenswrapper[4808]: E0217 16:39:10.965684 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.965698 4808 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" Feb 17 16:39:10 crc kubenswrapper[4808]: E0217 16:39:10.965734 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-utilities" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.965749 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-utilities" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.966218 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.969075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.988355 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.099208 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.099593 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.099798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.202322 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.202402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.202611 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc 
kubenswrapper[4808]: I0217 16:39:11.203221 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.203327 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.239451 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.290216 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.868463 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:11 crc kubenswrapper[4808]: W0217 16:39:11.870131 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257c9d3f_48cc_4f4f_83f8_9474261e2ca4.slice/crio-4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394 WatchSource:0}: Error finding container 4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394: Status 404 returned error can't find the container with id 4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394 Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.983843 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.990178 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.002159 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.126592 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.126670 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.127376 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: E0217 16:39:12.147668 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.229879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.230105 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.230197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.231372 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.231480 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.257326 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.365078 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.384601 4808 generic.go:334] "Generic (PLEG): container finished" podID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" exitCode=0 Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.384655 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add"} Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.384691 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerStarted","Data":"4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394"} Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.838129 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:12 crc kubenswrapper[4808]: W0217 16:39:12.845532 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f557dd_9578_4e27_afb8_2c090c0b6fe2.slice/crio-e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3 WatchSource:0}: Error finding container e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3: Status 404 returned error can't find the container with id e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3 Feb 17 16:39:13 crc kubenswrapper[4808]: I0217 16:39:13.407997 4808 generic.go:334] "Generic (PLEG): container finished" podID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c" exitCode=0 Feb 17 16:39:13 crc kubenswrapper[4808]: I0217 16:39:13.408059 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c"} Feb 17 16:39:13 crc kubenswrapper[4808]: I0217 16:39:13.408095 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerStarted","Data":"e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3"} Feb 17 16:39:14 crc kubenswrapper[4808]: I0217 16:39:14.418760 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" exitCode=0 Feb 17 16:39:14 crc kubenswrapper[4808]: I0217 16:39:14.418833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a"} Feb 17 16:39:14 crc kubenswrapper[4808]: I0217 16:39:14.422982 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerStarted","Data":"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"} Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.435880 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerStarted","Data":"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d"} Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.438934 4808 generic.go:334] "Generic (PLEG): container finished" podID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9" exitCode=0 Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.439000 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"} Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.462433 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mk25l" podStartSLOduration=3.061521446 podStartE2EDuration="5.462411724s" podCreationTimestamp="2026-02-17 16:39:10 +0000 UTC" firstStartedPulling="2026-02-17 16:39:12.395356779 +0000 UTC m=+2715.911715852" lastFinishedPulling="2026-02-17 16:39:14.796247057 +0000 UTC m=+2718.312606130" observedRunningTime="2026-02-17 16:39:15.458460788 +0000 UTC m=+2718.974819881" watchObservedRunningTime="2026-02-17 16:39:15.462411724 +0000 UTC m=+2718.978770797" Feb 17 16:39:16 crc kubenswrapper[4808]: I0217 16:39:16.449016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerStarted","Data":"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"} Feb 17 16:39:16 crc kubenswrapper[4808]: I0217 16:39:16.472714 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lzjt6" podStartSLOduration=2.824454738 podStartE2EDuration="5.472697416s" podCreationTimestamp="2026-02-17 16:39:11 +0000 UTC" firstStartedPulling="2026-02-17 16:39:13.410310836 +0000 UTC m=+2716.926669919" lastFinishedPulling="2026-02-17 16:39:16.058553524 +0000 UTC m=+2719.574912597" observedRunningTime="2026-02-17 16:39:16.472522671 +0000 UTC m=+2719.988881754" watchObservedRunningTime="2026-02-17 16:39:16.472697416 +0000 UTC m=+2719.989056489" Feb 17 16:39:19 crc kubenswrapper[4808]: E0217 16:39:19.148726 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.290517 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.290925 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.358014 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.543260 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.593960 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.594025 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.627216 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.365610 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.365656 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.431886 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.552021 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:23 crc kubenswrapper[4808]: I0217 16:39:23.519346 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mk25l" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" containerID="cri-o://6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" gracePeriod=2 Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.055625 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.216029 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.216238 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.216326 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.217604 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities" (OuterVolumeSpecName: "utilities") pod "257c9d3f-48cc-4f4f-83f8-9474261e2ca4" (UID: "257c9d3f-48cc-4f4f-83f8-9474261e2ca4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.223540 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx" (OuterVolumeSpecName: "kube-api-access-g6xxx") pod "257c9d3f-48cc-4f4f-83f8-9474261e2ca4" (UID: "257c9d3f-48cc-4f4f-83f8-9474261e2ca4"). InnerVolumeSpecName "kube-api-access-g6xxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.319777 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.319805 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.532675 4808 generic.go:334] "Generic (PLEG): container finished" podID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" exitCode=0 Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.532757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d"} Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.533026 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394"} Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.532856 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.533055 4808 scope.go:117] "RemoveContainer" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.564772 4808 scope.go:117] "RemoveContainer" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.579292 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "257c9d3f-48cc-4f4f-83f8-9474261e2ca4" (UID: "257c9d3f-48cc-4f4f-83f8-9474261e2ca4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.594902 4808 scope.go:117] "RemoveContainer" containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.628338 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.645489 4808 scope.go:117] "RemoveContainer" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" Feb 17 16:39:24 crc kubenswrapper[4808]: E0217 16:39:24.646050 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d\": container with ID starting with 6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d not found: ID does not exist" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.646119 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d"} err="failed to get container status \"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d\": rpc error: code = NotFound desc = could not find container \"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d\": container with ID starting with 6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d not found: ID does not exist" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.646161 4808 scope.go:117] "RemoveContainer" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" Feb 17 16:39:24 crc kubenswrapper[4808]: E0217 16:39:24.646656 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a\": container with ID starting with fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a not found: ID does not exist" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.646892 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a"} err="failed to get container status \"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a\": rpc error: code = NotFound desc = could not find container \"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a\": container with ID starting with fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a not found: ID does not exist" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.647075 4808 scope.go:117] "RemoveContainer" containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" Feb 17 16:39:24 crc kubenswrapper[4808]: E0217 16:39:24.647656 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add\": container with ID starting with 45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add not found: ID does not exist" 
containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.647688 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add"} err="failed to get container status \"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add\": rpc error: code = NotFound desc = could not find container \"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add\": container with ID starting with 45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add not found: ID does not exist" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.805181 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.805409 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lzjt6" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" containerID="cri-o://1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61" gracePeriod=2 Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.927533 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.938903 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.166704 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" path="/var/lib/kubelet/pods/257c9d3f-48cc-4f4f-83f8-9474261e2ca4/volumes" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.403928 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544127 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544284 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544450 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544986 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities" (OuterVolumeSpecName: "utilities") pod "d7f557dd-9578-4e27-afb8-2c090c0b6fe2" (UID: "d7f557dd-9578-4e27-afb8-2c090c0b6fe2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.545528 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.545988 4808 generic.go:334] "Generic (PLEG): container finished" podID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61" exitCode=0 Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"} Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546086 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3"} Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546094 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546105 4808 scope.go:117] "RemoveContainer" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.550041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk" (OuterVolumeSpecName: "kube-api-access-mljbk") pod "d7f557dd-9578-4e27-afb8-2c090c0b6fe2" (UID: "d7f557dd-9578-4e27-afb8-2c090c0b6fe2"). InnerVolumeSpecName "kube-api-access-mljbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.570685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7f557dd-9578-4e27-afb8-2c090c0b6fe2" (UID: "d7f557dd-9578-4e27-afb8-2c090c0b6fe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.614356 4808 scope.go:117] "RemoveContainer" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.634791 4808 scope.go:117] "RemoveContainer" containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.647532 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.647662 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.685269 4808 scope.go:117] "RemoveContainer" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61" Feb 17 16:39:25 crc kubenswrapper[4808]: E0217 16:39:25.685779 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61\": container with ID starting with 1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61 not found: ID does not exist" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.685882 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"} err="failed to get container status \"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61\": rpc error: code = NotFound desc = could not find container \"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61\": container with ID starting with 1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61 not found: ID does not exist" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.685973 4808 scope.go:117] "RemoveContainer" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9" Feb 17 16:39:25 crc kubenswrapper[4808]: E0217 16:39:25.686298 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9\": container with ID starting with cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9 not found: ID does not exist" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.686405 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"} err="failed to get container status \"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9\": rpc error: code = NotFound desc = could not find container \"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9\": container with ID starting with cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9 not found: ID does not exist" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.686483 4808 scope.go:117] "RemoveContainer" 
containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c" Feb 17 16:39:25 crc kubenswrapper[4808]: E0217 16:39:25.687126 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c\": container with ID starting with 719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c not found: ID does not exist" containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.687190 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c"} err="failed to get container status \"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c\": rpc error: code = NotFound desc = could not find container \"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c\": container with ID starting with 719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c not found: ID does not exist" Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.882725 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.890213 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:26 crc kubenswrapper[4808]: E0217 16:39:26.148403 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:39:27 crc kubenswrapper[4808]: I0217 16:39:27.166251 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" path="/var/lib/kubelet/pods/d7f557dd-9578-4e27-afb8-2c090c0b6fe2/volumes" Feb 17 16:39:34 crc kubenswrapper[4808]: I0217 16:39:34.148331 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.251669 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.251742 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.251900 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.253150 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:39:40 crc kubenswrapper[4808]: E0217 16:39:40.148963 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:39:46 crc kubenswrapper[4808]: E0217 16:39:46.152248 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:39:51 crc kubenswrapper[4808]: I0217 16:39:51.592465 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:51 crc kubenswrapper[4808]: I0217 16:39:51.592965 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:53 crc kubenswrapper[4808]: E0217 16:39:53.148119 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:39:59 crc kubenswrapper[4808]: E0217 16:39:59.150432 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.257919 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.258666 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.258841 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.260097 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:40:10 crc kubenswrapper[4808]: E0217 16:40:10.148890 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.592258 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.592891 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.592940 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.594065 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.594134 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d" gracePeriod=600 Feb 17 16:40:22 crc kubenswrapper[4808]: E0217 16:40:22.148654 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.170937 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d" exitCode=0 Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.170993 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d"} Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.171047 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143"} Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.171065 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:40:23 crc kubenswrapper[4808]: E0217 16:40:23.148659 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:40:35 crc kubenswrapper[4808]: E0217 16:40:35.148589 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:40:35 crc kubenswrapper[4808]: E0217 16:40:35.148905 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:40:46 crc kubenswrapper[4808]: E0217 16:40:46.150095 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:40:47 crc kubenswrapper[4808]: E0217 16:40:47.162934 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:40:57 crc kubenswrapper[4808]: E0217 16:40:57.184435 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:41:02 crc kubenswrapper[4808]: E0217 16:41:02.149312 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:41:08 crc kubenswrapper[4808]: E0217 16:41:08.148105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:41:14 crc kubenswrapper[4808]: E0217 16:41:14.147822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:41:23 crc kubenswrapper[4808]: E0217 16:41:23.148263 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:41:26 crc kubenswrapper[4808]: E0217 16:41:26.149159 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:41:37 crc kubenswrapper[4808]: E0217 16:41:37.176549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:41:38 crc kubenswrapper[4808]: E0217 16:41:38.148366 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:41:48 crc kubenswrapper[4808]: E0217 16:41:48.149430 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:41:50 crc kubenswrapper[4808]: E0217 16:41:50.147779 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:42:03 crc kubenswrapper[4808]: E0217 16:42:03.148271 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:42:05 crc kubenswrapper[4808]: E0217 16:42:05.151911 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:42:16 crc kubenswrapper[4808]: E0217 16:42:16.148977 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:42:20 crc kubenswrapper[4808]: E0217 16:42:20.149599 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:42:21 crc kubenswrapper[4808]: I0217 16:42:21.592358 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:42:21 crc kubenswrapper[4808]: I0217 16:42:21.592461 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:42:28 crc kubenswrapper[4808]: E0217 16:42:28.150497 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:42:32 crc kubenswrapper[4808]: E0217 16:42:32.148262 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:42:41 crc kubenswrapper[4808]: E0217 16:42:41.149866 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:42:46 crc kubenswrapper[4808]: E0217 16:42:46.149091 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:42:51 crc kubenswrapper[4808]: I0217 16:42:51.598671 4808 patch_prober.go:28] interesting 
pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:42:51 crc kubenswrapper[4808]: I0217 16:42:51.599392 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:42:52 crc kubenswrapper[4808]: E0217 16:42:52.150140 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:42:52 crc kubenswrapper[4808]: I0217 16:42:52.886618 4808 generic.go:334] "Generic (PLEG): container finished" podID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerID="65dafe8a1101f4ddfb7e0bce9d223f707cac8bd45bd857f95672b3b349fe2857" exitCode=2 Feb 17 16:42:52 crc kubenswrapper[4808]: I0217 16:42:52.886678 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerDied","Data":"65dafe8a1101f4ddfb7e0bce9d223f707cac8bd45bd857f95672b3b349fe2857"} Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.381248 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.576852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"c51156c6-7d2b-4871-9ae0-963c4eb67454\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.577627 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"c51156c6-7d2b-4871-9ae0-963c4eb67454\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.577858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"c51156c6-7d2b-4871-9ae0-963c4eb67454\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.583407 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss" (OuterVolumeSpecName: "kube-api-access-nf9ss") pod "c51156c6-7d2b-4871-9ae0-963c4eb67454" (UID: "c51156c6-7d2b-4871-9ae0-963c4eb67454"). InnerVolumeSpecName "kube-api-access-nf9ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.615642 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c51156c6-7d2b-4871-9ae0-963c4eb67454" (UID: "c51156c6-7d2b-4871-9ae0-963c4eb67454"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.615722 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory" (OuterVolumeSpecName: "inventory") pod "c51156c6-7d2b-4871-9ae0-963c4eb67454" (UID: "c51156c6-7d2b-4871-9ae0-963c4eb67454"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.680267 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.680295 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.680303 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") on node \"crc\" DevicePath \"\"" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.914331 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerDied","Data":"0bd0464d30a220d6d00def18b5261451af4eeafffd898c8b5ae55cfbfb63623f"} Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.914757 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd0464d30a220d6d00def18b5261451af4eeafffd898c8b5ae55cfbfb63623f" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.914900 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:43:00 crc kubenswrapper[4808]: E0217 16:43:00.149837 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:06 crc kubenswrapper[4808]: E0217 16:43:06.147296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:15 crc kubenswrapper[4808]: E0217 16:43:15.148026 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:18 crc kubenswrapper[4808]: E0217 16:43:18.153433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.592092 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.592478 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.592538 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.593641 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.593735 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" gracePeriod=600 Feb 17 16:43:21 crc 
kubenswrapper[4808]: E0217 16:43:21.720123 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.191708 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" exitCode=0 Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.191749 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143"} Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.191795 4808 scope.go:117] "RemoveContainer" containerID="7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d" Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.192877 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:22 crc kubenswrapper[4808]: E0217 16:43:22.193418 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:28 crc kubenswrapper[4808]: E0217 16:43:28.148858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:30 crc kubenswrapper[4808]: E0217 16:43:30.148818 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.035727 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv"] Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036610 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036628 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036653 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" Feb 17 16:43:32 crc 
kubenswrapper[4808]: I0217 16:43:32.036662 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036678 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036687 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036704 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036712 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036737 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036745 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036770 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036779 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036801 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036809 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.037039 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.037062 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.037081 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.038139 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.041650 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.042035 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.042306 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.042922 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.066403 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv"] Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.236969 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.237242 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.237339 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.339222 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.339296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.339330 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.358800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.359187 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.376546 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.663966 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:33 crc kubenswrapper[4808]: I0217 16:43:33.215444 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv"] Feb 17 16:43:33 crc kubenswrapper[4808]: I0217 16:43:33.331545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerStarted","Data":"beadab6c3a4b086c709ebcfa9079469f2ee23c30727b884ea9d18a17c5d65df6"} Feb 17 16:43:34 crc kubenswrapper[4808]: I0217 16:43:34.368414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerStarted","Data":"29d16363f6fa98f265f09c289debfecc64d954c62ee36d69f30d4932fce9caae"} Feb 17 16:43:34 crc kubenswrapper[4808]: I0217 16:43:34.407677 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" podStartSLOduration=1.950833644 podStartE2EDuration="2.407659793s" podCreationTimestamp="2026-02-17 16:43:32 +0000 UTC" firstStartedPulling="2026-02-17 16:43:33.227560202 +0000 UTC m=+2976.743919275" lastFinishedPulling="2026-02-17 16:43:33.684386311 +0000 UTC m=+2977.200745424" observedRunningTime="2026-02-17 16:43:34.390801608 +0000 UTC m=+2977.907160761" watchObservedRunningTime="2026-02-17 16:43:34.407659793 +0000 UTC m=+2977.924018856" Feb 17 16:43:35 crc kubenswrapper[4808]: I0217 16:43:35.146405 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:35 
crc kubenswrapper[4808]: E0217 16:43:35.147272 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:41 crc kubenswrapper[4808]: E0217 16:43:41.149237 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:45 crc kubenswrapper[4808]: E0217 16:43:45.148711 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:47 crc kubenswrapper[4808]: I0217 16:43:47.154407 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:47 crc kubenswrapper[4808]: E0217 16:43:47.155029 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:53 crc kubenswrapper[4808]: E0217 16:43:53.151507 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:56 crc kubenswrapper[4808]: E0217 16:43:56.148489 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:58 crc kubenswrapper[4808]: I0217 16:43:58.146296 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:58 crc kubenswrapper[4808]: E0217 16:43:58.147170 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:07 crc kubenswrapper[4808]: E0217 16:44:07.153967 4808 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:08 crc kubenswrapper[4808]: E0217 16:44:08.148268 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:09 crc kubenswrapper[4808]: I0217 16:44:09.146416 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:09 crc kubenswrapper[4808]: E0217 16:44:09.147768 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:19 crc kubenswrapper[4808]: E0217 16:44:19.147945 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:21 crc kubenswrapper[4808]: E0217 16:44:21.146747 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:24 crc kubenswrapper[4808]: I0217 16:44:24.147044 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:24 crc kubenswrapper[4808]: E0217 16:44:24.149672 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:31 crc kubenswrapper[4808]: E0217 16:44:31.162876 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:36 crc kubenswrapper[4808]: E0217 16:44:36.148593 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:38 crc kubenswrapper[4808]: I0217 16:44:38.146959 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:38 crc kubenswrapper[4808]: E0217 16:44:38.147994 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:42 crc kubenswrapper[4808]: I0217 16:44:42.149621 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.274565 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.274648 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.274810 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.276202 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:47 crc kubenswrapper[4808]: E0217 16:44:47.154121 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:52 crc kubenswrapper[4808]: I0217 16:44:52.146441 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:52 crc kubenswrapper[4808]: E0217 16:44:52.147248 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:55 crc kubenswrapper[4808]: E0217 16:44:55.148096 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.161914 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.164217 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.166326 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.166674 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.175383 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.216878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.217262 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.217390 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.319232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.319296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.319389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.320533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod 
\"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.326187 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.338476 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.498144 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:01 crc kubenswrapper[4808]: I0217 16:45:01.013127 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 16:45:01 crc kubenswrapper[4808]: W0217 16:45:01.023067 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod450a44d1_3fb2_41f5_9200_59c6c1838c86.slice/crio-f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10 WatchSource:0}: Error finding container f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10: Status 404 returned error can't find the container with id f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10 Feb 17 16:45:01 crc kubenswrapper[4808]: I0217 16:45:01.412806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerStarted","Data":"51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d"} Feb 17 16:45:01 crc kubenswrapper[4808]: I0217 16:45:01.413090 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerStarted","Data":"f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10"} Feb 17 16:45:02 crc kubenswrapper[4808]: E0217 16:45:02.149808 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.424308 4808 generic.go:334] "Generic (PLEG): container finished" podID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerID="51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d" exitCode=0 Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.424365 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" 
event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerDied","Data":"51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d"} Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.921057 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.980186 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod \"450a44d1-3fb2-41f5-9200-59c6c1838c86\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.980287 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"450a44d1-3fb2-41f5-9200-59c6c1838c86\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.980372 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"450a44d1-3fb2-41f5-9200-59c6c1838c86\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.981313 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume" (OuterVolumeSpecName: "config-volume") pod "450a44d1-3fb2-41f5-9200-59c6c1838c86" (UID: "450a44d1-3fb2-41f5-9200-59c6c1838c86"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.985372 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc" (OuterVolumeSpecName: "kube-api-access-jskgc") pod "450a44d1-3fb2-41f5-9200-59c6c1838c86" (UID: "450a44d1-3fb2-41f5-9200-59c6c1838c86"). InnerVolumeSpecName "kube-api-access-jskgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.991339 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "450a44d1-3fb2-41f5-9200-59c6c1838c86" (UID: "450a44d1-3fb2-41f5-9200-59c6c1838c86"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.083223 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.083274 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.083290 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.433980 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerDied","Data":"f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10"} Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.434012 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.434016 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10" Feb 17 16:45:04 crc kubenswrapper[4808]: I0217 16:45:04.013410 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:45:04 crc kubenswrapper[4808]: I0217 16:45:04.023863 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:45:05 crc kubenswrapper[4808]: I0217 16:45:05.182897 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" path="/var/lib/kubelet/pods/d231c3b2-ee81-488d-b526-77ab9c8a2822/volumes" Feb 17 16:45:07 crc kubenswrapper[4808]: I0217 16:45:07.161395 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:07 crc kubenswrapper[4808]: E0217 16:45:07.162780 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:07 crc kubenswrapper[4808]: E0217 16:45:07.163297 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.271114 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.271693 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.271858 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.273214 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:18 crc kubenswrapper[4808]: I0217 16:45:18.156064 4808 scope.go:117] "RemoveContainer" containerID="a5c43165b9e051b89a89100aebbe7b3cc4c01775c317fec65c06ca231b1fc493" Feb 17 16:45:19 crc kubenswrapper[4808]: E0217 16:45:19.147666 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:21 crc kubenswrapper[4808]: I0217 16:45:21.146194 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:21 crc kubenswrapper[4808]: E0217 16:45:21.146805 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:31 crc kubenswrapper[4808]: E0217 16:45:31.150211 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:32 crc kubenswrapper[4808]: I0217 16:45:32.145853 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:32 crc kubenswrapper[4808]: E0217 16:45:32.146138 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:33 crc kubenswrapper[4808]: E0217 16:45:33.150554 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:44 crc kubenswrapper[4808]: E0217 16:45:44.149205 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:46 crc kubenswrapper[4808]: I0217 16:45:46.146913 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:46 crc kubenswrapper[4808]: E0217 16:45:46.148482 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:46 crc kubenswrapper[4808]: E0217 16:45:46.149795 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:57 crc kubenswrapper[4808]: E0217 16:45:57.160807 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:58 crc kubenswrapper[4808]: E0217 16:45:58.148499 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:01 crc kubenswrapper[4808]: I0217 16:46:01.146229 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:01 crc kubenswrapper[4808]: E0217 16:46:01.147175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:09 crc kubenswrapper[4808]: E0217 16:46:09.150005 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:46:13 crc kubenswrapper[4808]: E0217 16:46:13.149037 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:14 crc kubenswrapper[4808]: I0217 16:46:14.146353 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:14 crc kubenswrapper[4808]: E0217 16:46:14.147079 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:24 crc kubenswrapper[4808]: E0217 16:46:24.149784 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:46:24 crc kubenswrapper[4808]: E0217 16:46:24.149851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:26 crc kubenswrapper[4808]: I0217 16:46:26.146954 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:26 crc kubenswrapper[4808]: E0217 16:46:26.147779 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:36 crc kubenswrapper[4808]: E0217 16:46:36.146926 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:37 crc kubenswrapper[4808]: E0217 16:46:37.152680 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:46:39 crc kubenswrapper[4808]: I0217 16:46:39.146271 4808 scope.go:117] "RemoveContainer" 
containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:39 crc kubenswrapper[4808]: E0217 16:46:39.147125 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:49 crc kubenswrapper[4808]: E0217 16:46:49.149757 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:51 crc kubenswrapper[4808]: I0217 16:46:51.146489 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:51 crc kubenswrapper[4808]: E0217 16:46:51.147109 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:52 crc kubenswrapper[4808]: E0217 16:46:52.148979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:02 crc kubenswrapper[4808]: E0217 16:47:02.150860 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:04 crc kubenswrapper[4808]: E0217 16:47:04.147859 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:05 crc kubenswrapper[4808]: I0217 16:47:05.146765 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:05 crc kubenswrapper[4808]: E0217 16:47:05.147652 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:14 crc kubenswrapper[4808]: E0217 16:47:14.148635 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:15 crc kubenswrapper[4808]: E0217 16:47:15.147436 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:20 crc kubenswrapper[4808]: I0217 16:47:20.146450 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:20 crc kubenswrapper[4808]: E0217 16:47:20.147070 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:25 crc kubenswrapper[4808]: E0217 16:47:25.149411 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:29 crc kubenswrapper[4808]: E0217 16:47:29.148226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.771295 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:33 crc kubenswrapper[4808]: E0217 16:47:33.772320 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerName="collect-profiles" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.772338 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerName="collect-profiles" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.772544 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerName="collect-profiles" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.774295 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.787486 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.836871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.837167 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.837299 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.939729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940138 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940338 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940646 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940814 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.959922 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:34 crc kubenswrapper[4808]: I0217 16:47:34.106729 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:34 crc kubenswrapper[4808]: I0217 16:47:34.666433 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.081817 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b9d4467-638d-493d-8574-8499f17c5670" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" exitCode=0 Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.081884 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5"} Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.082132 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerStarted","Data":"8fafe0d538171128d4325a574285c5aef22785e8fd1300457f0668def81f80ee"} Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.146487 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:35 crc kubenswrapper[4808]: E0217 16:47:35.147553 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:36 crc kubenswrapper[4808]: I0217 16:47:36.097432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerStarted","Data":"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b"} Feb 17 16:47:38 crc kubenswrapper[4808]: I0217 16:47:38.121310 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b9d4467-638d-493d-8574-8499f17c5670" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" exitCode=0 Feb 17 16:47:38 crc kubenswrapper[4808]: I0217 16:47:38.121362 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b"} Feb 17 16:47:39 crc kubenswrapper[4808]: I0217 16:47:39.134133 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerStarted","Data":"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9"} Feb 17 16:47:39 crc kubenswrapper[4808]: E0217 16:47:39.147982 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.107401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.107967 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: E0217 16:47:44.149821 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.156742 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.195432 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ptxmb" podStartSLOduration=7.7533514100000005 podStartE2EDuration="11.195409097s" podCreationTimestamp="2026-02-17 16:47:33 +0000 UTC" firstStartedPulling="2026-02-17 16:47:35.083809114 +0000 UTC m=+3218.600168187" lastFinishedPulling="2026-02-17 16:47:38.525866801 +0000 UTC m=+3222.042225874" observedRunningTime="2026-02-17 16:47:39.15360748 +0000 UTC m=+3222.669966563" watchObservedRunningTime="2026-02-17 16:47:44.195409097 +0000 UTC m=+3227.711768170" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.235216 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.407707 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.146632 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:46 crc kubenswrapper[4808]: E0217 16:47:46.147246 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.208179 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ptxmb" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" containerID="cri-o://40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" gracePeriod=2 Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.693637 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.815839 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"7b9d4467-638d-493d-8574-8499f17c5670\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.816061 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"7b9d4467-638d-493d-8574-8499f17c5670\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.816148 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"7b9d4467-638d-493d-8574-8499f17c5670\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.817240 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities" (OuterVolumeSpecName: "utilities") pod "7b9d4467-638d-493d-8574-8499f17c5670" (UID: "7b9d4467-638d-493d-8574-8499f17c5670"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.821674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv" (OuterVolumeSpecName: "kube-api-access-k6snv") pod "7b9d4467-638d-493d-8574-8499f17c5670" (UID: "7b9d4467-638d-493d-8574-8499f17c5670"). InnerVolumeSpecName "kube-api-access-k6snv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.888283 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b9d4467-638d-493d-8574-8499f17c5670" (UID: "7b9d4467-638d-493d-8574-8499f17c5670"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.919254 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.919301 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") on node \"crc\" DevicePath \"\"" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.919318 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.218780 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b9d4467-638d-493d-8574-8499f17c5670" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" exitCode=0 Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.218832 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.218833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9"} Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.219025 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"8fafe0d538171128d4325a574285c5aef22785e8fd1300457f0668def81f80ee"} Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.219068 4808 scope.go:117] "RemoveContainer" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.243293 4808 scope.go:117] "RemoveContainer" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.250262 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.258843 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.267378 4808 scope.go:117] "RemoveContainer" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.316978 4808 scope.go:117] "RemoveContainer" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" Feb 17 16:47:47 crc kubenswrapper[4808]: E0217 16:47:47.317662 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9\": container with ID starting with 40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9 not found: ID does not exist" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.317711 
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9"} err="failed to get container status \"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9\": rpc error: code = NotFound desc = could not find container \"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9\": container with ID starting with 40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9 not found: ID does not exist" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.317746 4808 scope.go:117] "RemoveContainer" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" Feb 17 16:47:47 crc kubenswrapper[4808]: E0217 16:47:47.318161 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b\": container with ID starting with 7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b not found: ID does not exist" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.318195 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b"} err="failed to get container status \"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b\": rpc error: code = NotFound desc = could not find container \"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b\": container with ID starting with 7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b not found: ID does not exist" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.318215 4808 scope.go:117] "RemoveContainer" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" Feb 17 16:47:47 crc kubenswrapper[4808]: E0217 16:47:47.318609 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5\": container with ID starting with 2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5 not found: ID does not exist" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.318640 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5"} err="failed to get container status \"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5\": rpc error: code = NotFound desc = could not find container \"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5\": container with ID starting with 2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5 not found: ID does not exist" Feb 17 16:47:49 crc kubenswrapper[4808]: I0217 16:47:49.171213 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9d4467-638d-493d-8574-8499f17c5670" path="/var/lib/kubelet/pods/7b9d4467-638d-493d-8574-8499f17c5670/volumes" Feb 17 16:47:51 crc kubenswrapper[4808]: E0217 16:47:51.149220 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:55 crc kubenswrapper[4808]: E0217 16:47:55.148013 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:01 crc kubenswrapper[4808]: I0217 16:48:01.146408 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:48:01 crc kubenswrapper[4808]: E0217 16:48:01.149397 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:48:03 crc kubenswrapper[4808]: E0217 16:48:03.148395 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:09 crc kubenswrapper[4808]: E0217 16:48:09.147360 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:15 crc kubenswrapper[4808]: I0217 16:48:15.174290 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:48:15 crc kubenswrapper[4808]: E0217 16:48:15.175117 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:48:15 crc kubenswrapper[4808]: E0217 16:48:15.178832 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:20 crc kubenswrapper[4808]: E0217 16:48:20.147433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:26 crc kubenswrapper[4808]: E0217 16:48:26.148617 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:28 crc kubenswrapper[4808]: I0217 16:48:28.146454 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:48:28 crc kubenswrapper[4808]: I0217 16:48:28.669907 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50"} Feb 17 16:48:31 crc kubenswrapper[4808]: E0217 16:48:31.150399 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:38 crc kubenswrapper[4808]: E0217 16:48:38.149336 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:44 crc kubenswrapper[4808]: E0217 16:48:44.148909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:53 crc kubenswrapper[4808]: E0217 16:48:53.151523 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:59 crc kubenswrapper[4808]: E0217 16:48:59.148226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:05 crc kubenswrapper[4808]: E0217 16:49:05.148458 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.501106 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:07 crc kubenswrapper[4808]: E0217 16:49:07.502042 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502065 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" Feb 17 16:49:07 crc kubenswrapper[4808]: E0217 16:49:07.502112 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-utilities" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502125 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-utilities" Feb 17 16:49:07 crc kubenswrapper[4808]: E0217 16:49:07.502171 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-content" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502183 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-content" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502556 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.505108 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.529692 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.675928 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.676029 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.676381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.778420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.778516 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.778560 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.779055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.779098 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.800978 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.836337 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:08 crc kubenswrapper[4808]: I0217 16:49:08.310495 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:09 crc kubenswrapper[4808]: I0217 16:49:09.154866 4808 generic.go:334] "Generic (PLEG): container finished" podID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" exitCode=0 Feb 17 16:49:09 crc kubenswrapper[4808]: I0217 16:49:09.156681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1"} Feb 17 16:49:09 crc kubenswrapper[4808]: I0217 16:49:09.156742 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerStarted","Data":"c7bdfc2fd5f40c6a9fd9e74ee22160de04cc32cff6460663c59ebee846db84e6"} Feb 17 16:49:10 crc kubenswrapper[4808]: E0217 16:49:10.147731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:10 crc kubenswrapper[4808]: I0217 16:49:10.171290 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerStarted","Data":"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2"} Feb 17 16:49:13 crc kubenswrapper[4808]: I0217 16:49:13.200332 4808 generic.go:334] "Generic (PLEG): container finished" podID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" exitCode=0 Feb 17 16:49:13 crc kubenswrapper[4808]: I0217 16:49:13.200439 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2"} Feb 17 16:49:14 crc kubenswrapper[4808]: I0217 16:49:14.212800 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerStarted","Data":"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a"} Feb 17 16:49:14 crc kubenswrapper[4808]: I0217 16:49:14.239322 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7gsg" podStartSLOduration=2.765029117 podStartE2EDuration="7.239305336s" podCreationTimestamp="2026-02-17 16:49:07 +0000 UTC" firstStartedPulling="2026-02-17 16:49:09.156484575 +0000 UTC m=+3312.672843638" lastFinishedPulling="2026-02-17 16:49:13.630760754 +0000 UTC m=+3317.147119857" observedRunningTime="2026-02-17 16:49:14.231012733 +0000 UTC m=+3317.747371806" watchObservedRunningTime="2026-02-17 16:49:14.239305336 +0000 UTC m=+3317.755664409" Feb 17 16:49:17 crc kubenswrapper[4808]: I0217 16:49:17.837133 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:17 crc kubenswrapper[4808]: I0217 16:49:17.837792 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:18 crc kubenswrapper[4808]: I0217 16:49:18.884088 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p7gsg" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" probeResult="failure" output=< Feb 17 16:49:18 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:49:18 crc kubenswrapper[4808]: > Feb 17 16:49:20 crc kubenswrapper[4808]: E0217 16:49:20.148331 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:23 crc kubenswrapper[4808]: E0217 16:49:23.146773 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:27 crc kubenswrapper[4808]: I0217 16:49:27.905401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:27 crc kubenswrapper[4808]: I0217 16:49:27.972735 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:28 crc kubenswrapper[4808]: I0217 16:49:28.151346 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:29 crc kubenswrapper[4808]: I0217 16:49:29.359197 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7gsg" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" containerID="cri-o://3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" gracePeriod=2 Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.015819 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.116476 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.116991 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.117205 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.117674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities" (OuterVolumeSpecName: "utilities") pod "78fee2d5-85c6-48be-bc7f-bcdcb0720230" (UID: "78fee2d5-85c6-48be-bc7f-bcdcb0720230"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.118292 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.121791 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj" (OuterVolumeSpecName: "kube-api-access-tkslj") pod "78fee2d5-85c6-48be-bc7f-bcdcb0720230" (UID: "78fee2d5-85c6-48be-bc7f-bcdcb0720230"). InnerVolumeSpecName "kube-api-access-tkslj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.220711 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.239046 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78fee2d5-85c6-48be-bc7f-bcdcb0720230" (UID: "78fee2d5-85c6-48be-bc7f-bcdcb0720230"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.322639 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369611 4808 generic.go:334] "Generic (PLEG): container finished" podID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" exitCode=0 Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369655 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a"} Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369690 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"c7bdfc2fd5f40c6a9fd9e74ee22160de04cc32cff6460663c59ebee846db84e6"} Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369647 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369712 4808 scope.go:117] "RemoveContainer" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.411601 4808 scope.go:117] "RemoveContainer" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.416056 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.438135 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.457831 4808 scope.go:117] "RemoveContainer" containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.485535 4808 scope.go:117] "RemoveContainer" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" Feb 17 16:49:30 crc kubenswrapper[4808]: E0217 16:49:30.485861 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a\": container with ID starting with 3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a not found: ID does not exist" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.485893 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a"} err="failed to get container status \"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a\": rpc error: code = NotFound desc = could not find container \"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a\": container with ID starting with 3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a not found: ID does not exist" Feb 17 16:49:30 crc 
kubenswrapper[4808]: I0217 16:49:30.485914 4808 scope.go:117] "RemoveContainer" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" Feb 17 16:49:30 crc kubenswrapper[4808]: E0217 16:49:30.486208 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2\": container with ID starting with a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2 not found: ID does not exist" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.486230 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2"} err="failed to get container status \"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2\": rpc error: code = NotFound desc = could not find container \"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2\": container with ID starting with a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2 not found: ID does not exist" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.486245 4808 scope.go:117] "RemoveContainer" containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" Feb 17 16:49:30 crc kubenswrapper[4808]: E0217 16:49:30.486616 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1\": container with ID starting with a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1 not found: ID does not exist" containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.486656 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1"} err="failed to get container status \"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1\": rpc error: code = NotFound desc = could not find container \"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1\": container with ID starting with a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1 not found: ID does not exist" Feb 17 16:49:31 crc kubenswrapper[4808]: I0217 16:49:31.167902 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" path="/var/lib/kubelet/pods/78fee2d5-85c6-48be-bc7f-bcdcb0720230/volumes" Feb 17 16:49:34 crc kubenswrapper[4808]: E0217 16:49:34.147877 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:37 crc kubenswrapper[4808]: E0217 16:49:37.157257 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:45 crc 
kubenswrapper[4808]: E0217 16:49:45.148196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.082906 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:49:47 crc kubenswrapper[4808]: E0217 16:49:47.083338 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083352 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" Feb 17 16:49:47 crc kubenswrapper[4808]: E0217 16:49:47.083372 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-content" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083397 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-content" Feb 17 16:49:47 crc kubenswrapper[4808]: E0217 16:49:47.083434 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-utilities" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083444 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-utilities" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083667 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.090236 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.121182 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.236993 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.237644 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.237876 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.339394 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.339589 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.339637 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.340008 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.340338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.365488 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.422977 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.896511 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.148683 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.280214 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.280292 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.280502 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.281994 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.581665 4808 generic.go:334] "Generic (PLEG): container finished" podID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" exitCode=0 Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.581746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40"} Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.581792 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerStarted","Data":"d194a15df8fb9a4340820ef784455320a798c31ce9ae86a22448ec96ceaf49bb"} Feb 17 16:49:49 crc kubenswrapper[4808]: I0217 16:49:49.594377 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerStarted","Data":"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3"} Feb 17 16:49:49 crc kubenswrapper[4808]: I0217 16:49:49.596853 4808 generic.go:334] "Generic (PLEG): container finished" podID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerID="29d16363f6fa98f265f09c289debfecc64d954c62ee36d69f30d4932fce9caae" exitCode=2 Feb 17 16:49:49 crc kubenswrapper[4808]: I0217 16:49:49.596909 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerDied","Data":"29d16363f6fa98f265f09c289debfecc64d954c62ee36d69f30d4932fce9caae"} Feb 17 16:49:50 crc kubenswrapper[4808]: I0217 16:49:50.613441 4808 generic.go:334] "Generic (PLEG): container finished" podID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" exitCode=0 Feb 17 16:49:50 crc kubenswrapper[4808]: I0217 16:49:50.613521 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3"} Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.257004 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.343687 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"d178dfcd-66d8-40ba-b740-909fe6e081ac\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.343787 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod \"d178dfcd-66d8-40ba-b740-909fe6e081ac\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.344625 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"d178dfcd-66d8-40ba-b740-909fe6e081ac\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.353319 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8" (OuterVolumeSpecName: "kube-api-access-9bjw8") pod "d178dfcd-66d8-40ba-b740-909fe6e081ac" (UID: "d178dfcd-66d8-40ba-b740-909fe6e081ac"). InnerVolumeSpecName "kube-api-access-9bjw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.378352 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory" (OuterVolumeSpecName: "inventory") pod "d178dfcd-66d8-40ba-b740-909fe6e081ac" (UID: "d178dfcd-66d8-40ba-b740-909fe6e081ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.390369 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d178dfcd-66d8-40ba-b740-909fe6e081ac" (UID: "d178dfcd-66d8-40ba-b740-909fe6e081ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.447119 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.447154 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.447185 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.628741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerDied","Data":"beadab6c3a4b086c709ebcfa9079469f2ee23c30727b884ea9d18a17c5d65df6"} Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.629033 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beadab6c3a4b086c709ebcfa9079469f2ee23c30727b884ea9d18a17c5d65df6" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.628809 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.636853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerStarted","Data":"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632"} Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.659374 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hpcqt" podStartSLOduration=2.202733944 podStartE2EDuration="4.659358689s" podCreationTimestamp="2026-02-17 16:49:47 +0000 UTC" firstStartedPulling="2026-02-17 16:49:48.585064859 +0000 UTC m=+3352.101423952" lastFinishedPulling="2026-02-17 16:49:51.041689624 +0000 UTC m=+3354.558048697" observedRunningTime="2026-02-17 16:49:51.655936427 +0000 UTC m=+3355.172295500" watchObservedRunningTime="2026-02-17 16:49:51.659358689 +0000 UTC m=+3355.175717762" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.423915 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.424465 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.489985 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.730101 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.781970 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 
16:49:59 crc kubenswrapper[4808]: I0217 16:49:59.708332 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hpcqt" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" containerID="cri-o://7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" gracePeriod=2 Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 16:50:00.148335 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.282137 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.461742 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.461822 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.462052 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.462704 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities" (OuterVolumeSpecName: "utilities") pod "376f1060-b0d7-4a70-8d5d-6ce46dd99721" (UID: "376f1060-b0d7-4a70-8d5d-6ce46dd99721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.467633 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8" (OuterVolumeSpecName: "kube-api-access-zqgn8") pod "376f1060-b0d7-4a70-8d5d-6ce46dd99721" (UID: "376f1060-b0d7-4a70-8d5d-6ce46dd99721"). InnerVolumeSpecName "kube-api-access-zqgn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.485254 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "376f1060-b0d7-4a70-8d5d-6ce46dd99721" (UID: "376f1060-b0d7-4a70-8d5d-6ce46dd99721"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.564730 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.564775 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.564794 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723033 4808 generic.go:334] "Generic (PLEG): container finished" podID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" exitCode=0 Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723096 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632"} Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723122 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723149 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"d194a15df8fb9a4340820ef784455320a798c31ce9ae86a22448ec96ceaf49bb"} Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723179 4808 scope.go:117] "RemoveContainer" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.779682 4808 scope.go:117] "RemoveContainer" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.787289 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.800444 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.805631 4808 scope.go:117] "RemoveContainer" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.873304 4808 scope.go:117] "RemoveContainer" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 16:50:00.873854 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632\": container with ID starting with 7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632 not found: ID does not exist" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.873906 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632"} err="failed to get container status \"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632\": rpc error: code = NotFound desc = could not find container \"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632\": container with ID starting with 7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632 not found: ID does not exist" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.873934 4808 scope.go:117] "RemoveContainer" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 16:50:00.874265 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3\": container with ID starting with 6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3 not found: ID does not exist" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.874317 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3"} err="failed to get container status \"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3\": rpc error: code = NotFound desc = could not find container \"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3\": container with ID starting with 6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3 not found: ID does not exist" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.874349 4808 scope.go:117] "RemoveContainer" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 16:50:00.874614 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40\": container with ID starting with 8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40 not found: ID does not exist" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.874641 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40"} err="failed to get container status \"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40\": rpc error: code = NotFound desc = could not find container \"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40\": container with ID starting with 8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40 not found: ID does not exist" Feb 17 16:50:01 crc kubenswrapper[4808]: E0217 16:50:01.149056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:01 crc kubenswrapper[4808]: I0217 16:50:01.162703 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" path="/var/lib/kubelet/pods/376f1060-b0d7-4a70-8d5d-6ce46dd99721/volumes" Feb 17 16:50:12 crc kubenswrapper[4808]: E0217 16:50:12.149317 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:14 crc kubenswrapper[4808]: E0217 16:50:14.147594 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:24 crc kubenswrapper[4808]: E0217 16:50:24.147067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.293721 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.294412 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.294633 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.296026 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:37 crc kubenswrapper[4808]: E0217 16:50:37.155067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:40 crc kubenswrapper[4808]: E0217 16:50:40.148313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:49 crc kubenswrapper[4808]: E0217 16:50:49.147380 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:51 crc kubenswrapper[4808]: I0217 16:50:51.592267 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:50:51 crc kubenswrapper[4808]: I0217 16:50:51.592918 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:50:52 crc kubenswrapper[4808]: E0217 16:50:52.148546 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:01 crc kubenswrapper[4808]: E0217 16:51:01.148807 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:07 crc kubenswrapper[4808]: E0217 16:51:07.161161 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.039665 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w"] Feb 17 16:51:09 crc kubenswrapper[4808]: E0217 
16:51:09.047640 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.047799 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:51:09 crc kubenswrapper[4808]: E0217 16:51:09.047970 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.048087 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" Feb 17 16:51:09 crc kubenswrapper[4808]: E0217 16:51:09.048204 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-utilities" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.048318 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-utilities" Feb 17 16:51:09 crc kubenswrapper[4808]: E0217 16:51:09.048465 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-content" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.048634 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-content" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.049154 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.049321 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.050666 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.056273 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w"] Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089695 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089932 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089806 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089695 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.138358 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.138633 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.138767 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.240665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.240769 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.240805 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.250973 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.256551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.257229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.408446 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.975404 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w"] Feb 17 16:51:10 crc kubenswrapper[4808]: I0217 16:51:10.558078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerStarted","Data":"4d7afca44c0ce541015a9eaa5dd29ff4546d0353ecc28cb2a4ccb253fd063a02"} Feb 17 16:51:11 crc kubenswrapper[4808]: I0217 16:51:11.569915 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerStarted","Data":"eda4c8fb0a2fa7440b4edbd3589d922c68fac2ff1d127cf6afae08986f0dcae1"} Feb 17 16:51:11 crc kubenswrapper[4808]: I0217 16:51:11.595961 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" podStartSLOduration=2.153054559 podStartE2EDuration="2.595926054s" podCreationTimestamp="2026-02-17 16:51:09 +0000 UTC" firstStartedPulling="2026-02-17 16:51:09.966545458 +0000 UTC m=+3433.482904541" lastFinishedPulling="2026-02-17 16:51:10.409416953 +0000 UTC m=+3433.925776036" observedRunningTime="2026-02-17 16:51:11.587525899 +0000 UTC m=+3435.103884992" watchObservedRunningTime="2026-02-17 16:51:11.595926054 +0000 UTC m=+3435.112285177" Feb 17 16:51:16 crc kubenswrapper[4808]: E0217 16:51:16.148486 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:19 crc kubenswrapper[4808]: E0217 16:51:19.152054 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:21 crc kubenswrapper[4808]: I0217 16:51:21.592139 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:21 crc kubenswrapper[4808]: I0217 16:51:21.592934 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:29 crc kubenswrapper[4808]: E0217 16:51:29.147363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:32 crc kubenswrapper[4808]: E0217 16:51:32.149822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:40 crc kubenswrapper[4808]: E0217 16:51:40.148741 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:43 crc kubenswrapper[4808]: E0217 16:51:43.148985 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.592377 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.592992 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.593044 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.593942 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.594005 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50" gracePeriod=600 Feb 17 16:51:52 crc kubenswrapper[4808]: I0217 16:51:52.016747 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50" exitCode=0 Feb 17 16:51:52 crc kubenswrapper[4808]: I0217 16:51:52.016856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50"} Feb 17 16:51:52 crc kubenswrapper[4808]: I0217 16:51:52.017155 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:51:53 crc kubenswrapper[4808]: E0217 16:51:53.158259 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:54 crc kubenswrapper[4808]: I0217 16:51:54.038916 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"} Feb 17 16:51:55 crc kubenswrapper[4808]: E0217 16:51:55.148350 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:05 crc kubenswrapper[4808]: E0217 16:52:05.148349 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:52:07 crc kubenswrapper[4808]: E0217 16:52:07.158925 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:18 crc kubenswrapper[4808]: E0217 16:52:18.148833 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:20 crc kubenswrapper[4808]: E0217 16:52:20.149140 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.081677 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.084688 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.099836 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.099994 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.100054 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.109936 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.203112 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.203800 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.203904 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.204294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.204459 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.233676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.414257 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.960534 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:23 crc kubenswrapper[4808]: W0217 16:52:23.967364 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5952700e_521a_4201_9352_33db5d11abf4.slice/crio-37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286 WatchSource:0}: Error finding container 37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286: Status 404 returned error can't find the container with id 37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286 Feb 17 16:52:24 crc kubenswrapper[4808]: I0217 16:52:24.348733 4808 generic.go:334] "Generic (PLEG): container finished" podID="5952700e-521a-4201-9352-33db5d11abf4" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f" exitCode=0 Feb 17 16:52:24 crc kubenswrapper[4808]: I0217 16:52:24.349105 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f"} Feb 17 16:52:24 crc kubenswrapper[4808]: I0217 16:52:24.349147 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerStarted","Data":"37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286"} Feb 17 16:52:25 crc kubenswrapper[4808]: I0217 16:52:25.359077 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerStarted","Data":"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"} Feb 17 16:52:26 crc kubenswrapper[4808]: I0217 16:52:26.370504 4808 generic.go:334] "Generic (PLEG): container finished" podID="5952700e-521a-4201-9352-33db5d11abf4" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b" exitCode=0 Feb 17 16:52:26 crc kubenswrapper[4808]: I0217 16:52:26.370560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"} Feb 17 16:52:27 crc kubenswrapper[4808]: I0217 16:52:27.382489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerStarted","Data":"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"} Feb 17 16:52:27 crc kubenswrapper[4808]: I0217 16:52:27.412014 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cs6t" podStartSLOduration=1.966009074 podStartE2EDuration="4.411992543s" podCreationTimestamp="2026-02-17 16:52:23 +0000 UTC" firstStartedPulling="2026-02-17 16:52:24.351358379 +0000 UTC m=+3507.867717452" lastFinishedPulling="2026-02-17 16:52:26.797341848 +0000 UTC m=+3510.313700921" observedRunningTime="2026-02-17 16:52:27.406019113 +0000 UTC m=+3510.922378226" watchObservedRunningTime="2026-02-17 16:52:27.411992543 +0000 UTC m=+3510.928351626" 
Feb 17 16:52:33 crc kubenswrapper[4808]: E0217 16:52:33.147355 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:52:33 crc kubenswrapper[4808]: E0217 16:52:33.147459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.415370 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cs6t"
Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.415432 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cs6t"
Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.468664 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7cs6t"
Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.519806 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cs6t"
Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.717979 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"]
Feb 17 16:52:35 crc kubenswrapper[4808]: I0217 16:52:35.463864 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7cs6t" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" containerID="cri-o://153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b" gracePeriod=2
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.101391 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.196264 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"5952700e-521a-4201-9352-33db5d11abf4\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") "
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.196442 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"5952700e-521a-4201-9352-33db5d11abf4\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") "
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.196484 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"5952700e-521a-4201-9352-33db5d11abf4\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") "
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.200487 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities" (OuterVolumeSpecName: "utilities") pod "5952700e-521a-4201-9352-33db5d11abf4" (UID: "5952700e-521a-4201-9352-33db5d11abf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.208079 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5" (OuterVolumeSpecName: "kube-api-access-7rqx5") pod "5952700e-521a-4201-9352-33db5d11abf4" (UID: "5952700e-521a-4201-9352-33db5d11abf4"). InnerVolumeSpecName "kube-api-access-7rqx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.299645 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") on node \"crc\" DevicePath \"\""
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.299684 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.436143 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5952700e-521a-4201-9352-33db5d11abf4" (UID: "5952700e-521a-4201-9352-33db5d11abf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485056 4808 generic.go:334] "Generic (PLEG): container finished" podID="5952700e-521a-4201-9352-33db5d11abf4" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b" exitCode=0
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485111 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"}
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286"}
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485176 4808 scope.go:117] "RemoveContainer" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.486520 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.506424 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.509609 4808 scope.go:117] "RemoveContainer" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.532265 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"]
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.541821 4808 scope.go:117] "RemoveContainer" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.544158 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"]
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.579219 4808 scope.go:117] "RemoveContainer" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"
Feb 17 16:52:36 crc kubenswrapper[4808]: E0217 16:52:36.579836 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b\": container with ID starting with 153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b not found: ID does not exist" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.579922 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"} err="failed to get container status \"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b\": rpc error: code = NotFound desc = could not find container \"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b\": container with ID starting with 153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b not found: ID does not exist"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.579990 4808 scope.go:117] "RemoveContainer" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"
Feb 17 16:52:36 crc kubenswrapper[4808]: E0217 16:52:36.580381 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b\": container with ID starting with 41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b not found: ID does not exist" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.580435 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"} err="failed to get container status \"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b\": rpc error: code = NotFound desc = could not find container \"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b\": container with ID starting with 41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b not found: ID does not exist"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.580461 4808 scope.go:117] "RemoveContainer" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f"
Feb 17 16:52:36 crc kubenswrapper[4808]: E0217 16:52:36.580859 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f\": container with ID starting with 6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f not found: ID does not exist" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f"
Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.580939 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f"} err="failed to get container status \"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f\": rpc error: code = NotFound desc = could not find container \"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f\": container with ID starting with 6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f not found: ID does not exist"
Feb 17 16:52:37 crc kubenswrapper[4808]: I0217 16:52:37.164093 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5952700e-521a-4201-9352-33db5d11abf4" path="/var/lib/kubelet/pods/5952700e-521a-4201-9352-33db5d11abf4/volumes"
Feb 17 16:52:45 crc kubenswrapper[4808]: E0217 16:52:45.150271 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:52:48 crc kubenswrapper[4808]: E0217 16:52:48.147936 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:52:57 crc kubenswrapper[4808]: E0217 16:52:57.162277 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:53:03 crc kubenswrapper[4808]: E0217 16:53:03.147868 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:53:08 crc kubenswrapper[4808]: E0217 16:53:08.148950 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:53:18 crc kubenswrapper[4808]: E0217 16:53:18.147825 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:53:20 crc kubenswrapper[4808]: E0217 16:53:20.147120 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:53:31 crc kubenswrapper[4808]: E0217 16:53:31.148149 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:53:33 crc kubenswrapper[4808]: E0217 16:53:33.148411 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:53:45 crc kubenswrapper[4808]: E0217 16:53:45.147921 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:53:48 crc kubenswrapper[4808]: E0217 16:53:48.148518 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:54:00 crc kubenswrapper[4808]: E0217 16:54:00.147814 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:54:01 crc kubenswrapper[4808]: E0217 16:54:01.147347 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:54:15 crc kubenswrapper[4808]: E0217 16:54:15.149937 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:54:16 crc kubenswrapper[4808]: E0217 16:54:16.148076 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:54:21 crc kubenswrapper[4808]: I0217 16:54:21.592503 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:54:21 crc kubenswrapper[4808]: I0217 16:54:21.593071 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:54:26 crc kubenswrapper[4808]: E0217 16:54:26.148414 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:54:31 crc kubenswrapper[4808]: E0217 16:54:31.149373 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:54:39 crc kubenswrapper[4808]: E0217 16:54:39.149175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:54:45 crc kubenswrapper[4808]: E0217 16:54:45.149015 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:54:51 crc kubenswrapper[4808]: I0217 16:54:51.592547 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:51 crc kubenswrapper[4808]: I0217 16:54:51.593077 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:54 crc kubenswrapper[4808]: E0217 16:54:54.150030 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:54:58 crc kubenswrapper[4808]: I0217 16:54:58.148232 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.248359 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.248420 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.248649 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.249851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:07 crc kubenswrapper[4808]: E0217 16:55:07.156762 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:11 crc kubenswrapper[4808]: E0217 16:55:11.146974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:21 crc kubenswrapper[4808]: E0217 16:55:21.149039 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592071 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592131 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592173 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592981 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.593049 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" gracePeriod=600 Feb 17 16:55:21 crc kubenswrapper[4808]: E0217 16:55:21.720839 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.231418 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" exitCode=0 Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.231477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"} Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.231544 4808 scope.go:117] "RemoveContainer" containerID="2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50" Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.232530 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:55:22 crc kubenswrapper[4808]: E0217 16:55:22.232879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:55:26 crc kubenswrapper[4808]: E0217 16:55:26.148494 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.276851 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.277486 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.277668 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.279204 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:37 crc kubenswrapper[4808]: I0217 16:55:37.157643 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:55:37 crc kubenswrapper[4808]: E0217 16:55:37.158292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:55:39 crc kubenswrapper[4808]: E0217 16:55:39.149408 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:46 crc kubenswrapper[4808]: E0217 16:55:46.147864 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:51 crc kubenswrapper[4808]: E0217 16:55:51.148287 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:52 crc kubenswrapper[4808]: I0217 16:55:52.146300 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:55:52 crc kubenswrapper[4808]: E0217 16:55:52.147001 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:01 crc kubenswrapper[4808]: E0217 16:56:01.148638 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:05 crc kubenswrapper[4808]: E0217 16:56:05.148687 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:07 crc kubenswrapper[4808]: I0217 
16:56:07.161706 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:07 crc kubenswrapper[4808]: E0217 16:56:07.162377 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:16 crc kubenswrapper[4808]: E0217 16:56:16.148197 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:18 crc kubenswrapper[4808]: E0217 16:56:18.148303 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:22 crc kubenswrapper[4808]: I0217 16:56:22.146326 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:22 crc kubenswrapper[4808]: E0217 16:56:22.147065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:30 crc kubenswrapper[4808]: E0217 16:56:30.147890 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:30 crc kubenswrapper[4808]: E0217 16:56:30.147931 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:34 crc kubenswrapper[4808]: I0217 16:56:34.146386 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:34 crc kubenswrapper[4808]: E0217 16:56:34.147648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:41 crc kubenswrapper[4808]: E0217 16:56:41.150417 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:45 crc kubenswrapper[4808]: E0217 16:56:45.149535 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:47 crc kubenswrapper[4808]: I0217 16:56:47.156226 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:47 crc kubenswrapper[4808]: E0217 16:56:47.157040 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:55 crc kubenswrapper[4808]: E0217 16:56:55.149979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:00 crc kubenswrapper[4808]: E0217 16:57:00.150198 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:02 crc kubenswrapper[4808]: I0217 16:57:02.148186 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:02 crc kubenswrapper[4808]: E0217 16:57:02.148910 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:08 crc kubenswrapper[4808]: E0217 16:57:08.148465 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:14 crc kubenswrapper[4808]: 
I0217 16:57:14.146642 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:14 crc kubenswrapper[4808]: E0217 16:57:14.147715 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:14 crc kubenswrapper[4808]: E0217 16:57:14.148393 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:22 crc kubenswrapper[4808]: I0217 16:57:22.513981 4808 generic.go:334] "Generic (PLEG): container finished" podID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" containerID="eda4c8fb0a2fa7440b4edbd3589d922c68fac2ff1d127cf6afae08986f0dcae1" exitCode=2 Feb 17 16:57:22 crc kubenswrapper[4808]: I0217 16:57:22.514203 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerDied","Data":"eda4c8fb0a2fa7440b4edbd3589d922c68fac2ff1d127cf6afae08986f0dcae1"} Feb 17 16:57:23 crc kubenswrapper[4808]: E0217 16:57:23.148822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.129629 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.294127 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.294288 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.294345 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.302082 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl" (OuterVolumeSpecName: "kube-api-access-xrlwl") pod "11efc7ce-322d-4bfe-95ad-c84d779a80d8" (UID: "11efc7ce-322d-4bfe-95ad-c84d779a80d8"). InnerVolumeSpecName "kube-api-access-xrlwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.330207 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory" (OuterVolumeSpecName: "inventory") pod "11efc7ce-322d-4bfe-95ad-c84d779a80d8" (UID: "11efc7ce-322d-4bfe-95ad-c84d779a80d8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.340374 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11efc7ce-322d-4bfe-95ad-c84d779a80d8" (UID: "11efc7ce-322d-4bfe-95ad-c84d779a80d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.397358 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.397404 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.397417 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") on node \"crc\" DevicePath \"\"" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.536735 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerDied","Data":"4d7afca44c0ce541015a9eaa5dd29ff4546d0353ecc28cb2a4ccb253fd063a02"} Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.536781 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7afca44c0ce541015a9eaa5dd29ff4546d0353ecc28cb2a4ccb253fd063a02" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.536816 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:57:25 crc kubenswrapper[4808]: E0217 16:57:25.148176 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:27 crc kubenswrapper[4808]: I0217 16:57:27.151691 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:27 crc kubenswrapper[4808]: E0217 16:57:27.152386 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:37 crc kubenswrapper[4808]: E0217 16:57:37.156709 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:38 crc kubenswrapper[4808]: E0217 16:57:38.147313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:42 crc kubenswrapper[4808]: I0217 16:57:42.146086 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:42 crc kubenswrapper[4808]: E0217 16:57:42.147136 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:50 crc kubenswrapper[4808]: E0217 16:57:50.149309 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:51 crc kubenswrapper[4808]: E0217 16:57:51.148294 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:53 crc kubenswrapper[4808]: I0217 16:57:53.145719 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:53 crc kubenswrapper[4808]: E0217 16:57:53.146293 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:01 crc kubenswrapper[4808]: E0217 16:58:01.148315 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:03 crc kubenswrapper[4808]: E0217 16:58:03.147671 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:05 crc kubenswrapper[4808]: I0217 16:58:05.146479 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:05 crc kubenswrapper[4808]: E0217 16:58:05.147103 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:14 crc kubenswrapper[4808]: E0217 16:58:14.149225 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:16 crc kubenswrapper[4808]: E0217 16:58:16.148037 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:19 crc kubenswrapper[4808]: I0217 16:58:19.146754 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:19 crc kubenswrapper[4808]: E0217 16:58:19.147461 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:27 crc kubenswrapper[4808]: E0217 16:58:27.158865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:28 crc kubenswrapper[4808]: E0217 16:58:28.155938 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:30 crc kubenswrapper[4808]: I0217 16:58:30.147908 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:30 crc kubenswrapper[4808]: E0217 16:58:30.148734 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:39 crc kubenswrapper[4808]: E0217 16:58:39.149132 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:39 crc kubenswrapper[4808]: E0217 16:58:39.149800 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:44 crc kubenswrapper[4808]: I0217 16:58:44.145186 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:44 crc kubenswrapper[4808]: E0217 16:58:44.145785 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.568517 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.569971 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-content" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570000 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-content" Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.570031 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570045 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.570091 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570103 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.570171 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-utilities" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570190 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-utilities" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570636 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570696 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.573465 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.583417 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.685382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.685423 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.686035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.787612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.787690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.787715 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.788161 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.788176 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.809752 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.903595 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:50 crc kubenswrapper[4808]: E0217 16:58:50.149755 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:50 crc kubenswrapper[4808]: E0217 16:58:50.149982 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:50 crc kubenswrapper[4808]: I0217 16:58:50.437683 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:58:51 crc kubenswrapper[4808]: I0217 16:58:51.436319 4808 generic.go:334] "Generic (PLEG): container finished" podID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerID="0f7be76c253b421188bbb3b738a02d69e75584ea443f6d666f3927a89f0359d4" exitCode=0 Feb 17 16:58:51 crc kubenswrapper[4808]: I0217 16:58:51.436657 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"0f7be76c253b421188bbb3b738a02d69e75584ea443f6d666f3927a89f0359d4"} Feb 17 16:58:51 crc kubenswrapper[4808]: I0217 16:58:51.436688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerStarted","Data":"9e069878c6614ce22e9d278c679f49524cf425a1cd6c9df95a316782240123ee"} Feb 17 16:58:52 crc kubenswrapper[4808]: I0217 16:58:52.447384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerStarted","Data":"5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2"} Feb 17 16:58:53 crc kubenswrapper[4808]: I0217 16:58:53.456003 4808 generic.go:334] "Generic (PLEG): container finished" podID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerID="5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2" exitCode=0 Feb 17 16:58:53 crc kubenswrapper[4808]: I0217 16:58:53.456086 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2"} Feb 17 16:58:54 crc kubenswrapper[4808]: I0217 16:58:54.470413 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" 
event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerStarted","Data":"7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44"} Feb 17 16:58:54 crc kubenswrapper[4808]: I0217 16:58:54.500660 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-szhdh" podStartSLOduration=3.014932426 podStartE2EDuration="5.500643293s" podCreationTimestamp="2026-02-17 16:58:49 +0000 UTC" firstStartedPulling="2026-02-17 16:58:51.438606416 +0000 UTC m=+3894.954965489" lastFinishedPulling="2026-02-17 16:58:53.924317263 +0000 UTC m=+3897.440676356" observedRunningTime="2026-02-17 16:58:54.490697773 +0000 UTC m=+3898.007056856" watchObservedRunningTime="2026-02-17 16:58:54.500643293 +0000 UTC m=+3898.017002366" Feb 17 16:58:57 crc kubenswrapper[4808]: I0217 16:58:57.157681 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:57 crc kubenswrapper[4808]: E0217 16:58:57.158675 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:59 crc kubenswrapper[4808]: I0217 16:58:59.903858 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:59 crc kubenswrapper[4808]: I0217 16:58:59.904317 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:59 crc kubenswrapper[4808]: I0217 16:58:59.959836 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:59:00 crc kubenswrapper[4808]: I0217 16:59:00.585890 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:59:00 crc kubenswrapper[4808]: I0217 16:59:00.634627 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:59:02 crc kubenswrapper[4808]: I0217 16:59:02.552069 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-szhdh" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server" containerID="cri-o://7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44" gracePeriod=2 Feb 17 16:59:03 crc kubenswrapper[4808]: I0217 16:59:03.567942 4808 generic.go:334] "Generic (PLEG): container finished" podID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerID="7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44" exitCode=0 Feb 17 16:59:03 crc kubenswrapper[4808]: I0217 16:59:03.568093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44"} Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.130013 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:59:04 crc kubenswrapper[4808]: E0217 16:59:04.147684 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:59:04 crc kubenswrapper[4808]: E0217 16:59:04.148653 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.313710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"740e9eba-2f31-48f8-af0e-68aec31e27cf\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.313862 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"740e9eba-2f31-48f8-af0e-68aec31e27cf\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.313983 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"740e9eba-2f31-48f8-af0e-68aec31e27cf\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.314801 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities" (OuterVolumeSpecName: "utilities") pod "740e9eba-2f31-48f8-af0e-68aec31e27cf" (UID: "740e9eba-2f31-48f8-af0e-68aec31e27cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.317290 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.327963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc" (OuterVolumeSpecName: "kube-api-access-czltc") pod "740e9eba-2f31-48f8-af0e-68aec31e27cf" (UID: "740e9eba-2f31-48f8-af0e-68aec31e27cf"). InnerVolumeSpecName "kube-api-access-czltc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.408985 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "740e9eba-2f31-48f8-af0e-68aec31e27cf" (UID: "740e9eba-2f31-48f8-af0e-68aec31e27cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.419417 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") on node \"crc\" DevicePath \"\"" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.419446 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.586680 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"9e069878c6614ce22e9d278c679f49524cf425a1cd6c9df95a316782240123ee"} Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.586784 4808 scope.go:117] "RemoveContainer" containerID="7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.586845 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.617494 4808 scope.go:117] "RemoveContainer" containerID="5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.652951 4808 scope.go:117] "RemoveContainer" containerID="0f7be76c253b421188bbb3b738a02d69e75584ea443f6d666f3927a89f0359d4" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.656262 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.668802 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:59:05 crc kubenswrapper[4808]: I0217 16:59:05.157921 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" path="/var/lib/kubelet/pods/740e9eba-2f31-48f8-af0e-68aec31e27cf/volumes" Feb 17 16:59:12 crc kubenswrapper[4808]: I0217 16:59:12.145852 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:59:12 crc kubenswrapper[4808]: E0217 16:59:12.146669 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:59:16 crc kubenswrapper[4808]: E0217 16:59:16.149849 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:59:19 crc kubenswrapper[4808]: E0217 16:59:19.147995 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:59:24 crc kubenswrapper[4808]: I0217 16:59:24.146116 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:59:24 crc kubenswrapper[4808]: E0217 16:59:24.147320 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:59:29 crc kubenswrapper[4808]: E0217 16:59:29.149535 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:59:33 crc kubenswrapper[4808]: E0217 16:59:33.148842 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:59:37 crc kubenswrapper[4808]: I0217 16:59:37.151530 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:59:37 crc kubenswrapper[4808]: E0217 16:59:37.152196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:59:44 crc kubenswrapper[4808]: E0217 16:59:44.148225 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:59:44 crc kubenswrapper[4808]: E0217 16:59:44.148643 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:59:49 crc kubenswrapper[4808]: I0217 16:59:49.147171 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:59:49 crc kubenswrapper[4808]: E0217 16:59:49.148182 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:59:55 crc kubenswrapper[4808]: E0217 16:59:55.148065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:59:58 crc kubenswrapper[4808]: E0217 16:59:58.151218 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.177724 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"] Feb 17 17:00:00 crc kubenswrapper[4808]: E0217 17:00:00.178692 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-content" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178707 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-content" Feb 17 17:00:00 crc kubenswrapper[4808]: E0217 17:00:00.178749 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-utilities" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178757 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-utilities" Feb 17 17:00:00 crc kubenswrapper[4808]: E0217 17:00:00.178776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178782 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178993 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.179781 4808 util.go:30] "No sandbox for pod can be found. 
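
[annotation] The SyncLoop ADD for collect-profiles-29522460-lvm9k is a CronJob-spawned Job pod; the RemoveStaleState and "Deleted CPUSet assignment" records just before it are the cpu and memory managers purging per-container state left by the deleted certified-operators pod (740e9eba-...) before the new pod is admitted. Kubernetes names each CronJob run's Job as the CronJob name plus the scheduled time in minutes since the Unix epoch (the trailing -lvm9k is the Job's own random pod suffix), so the numeric part can be decoded back to the schedule slot. A quick check, stdlib only, with the suffix taken from the record above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// From pod collect-profiles-29522460-lvm9k: Jobs created by a
    	// CronJob are named <cronjob>-<scheduled minutes since epoch>.
    	const suffixMinutes = 29522460
    	fmt.Println(time.Unix(suffixMinutes*60, 0).UTC())
    	// 2026-02-17 17:00:00 +0000 UTC, matching the 17:00:00 SyncLoop ADD.
    }
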
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.182181 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.186737 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.204204 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"] Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.232563 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.232706 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.232885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.335274 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.335402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.335534 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.336535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod 
\"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.343266 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.353772 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.502231 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.950057 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"] Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.028953 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"] Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.031075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.033033 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.034113 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.034127 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.036439 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.037423 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"] Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.065265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.065374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: 
\"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.065469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.168804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.169874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.170138 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.175453 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.175979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.186683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.195118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" 
event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerStarted","Data":"33c65ad70d91085715bc675a67dc26448778e53315c13827ed28c79f1083adea"} Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.195162 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerStarted","Data":"4d072af7d7b41f63565bf3505064037fbb281aa8cd9e503fc5a958dbac22ec0e"} Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.211058 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" podStartSLOduration=1.2110377159999999 podStartE2EDuration="1.211037716s" podCreationTimestamp="2026-02-17 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:00:01.207266022 +0000 UTC m=+3964.723625115" watchObservedRunningTime="2026-02-17 17:00:01.211037716 +0000 UTC m=+3964.727396789" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.368998 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.914524 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.918973 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"] Feb 17 17:00:02 crc kubenswrapper[4808]: I0217 17:00:02.204691 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerStarted","Data":"7ccbd48b8c6ddd33e393b5cc60c189b1890685479c8bc28981b9cf1783cd1867"} Feb 17 17:00:02 crc kubenswrapper[4808]: I0217 17:00:02.206605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerDied","Data":"33c65ad70d91085715bc675a67dc26448778e53315c13827ed28c79f1083adea"} Feb 17 17:00:02 crc kubenswrapper[4808]: I0217 17:00:02.206556 4808 generic.go:334] "Generic (PLEG): container finished" podID="7a359510-529f-4c70-8fee-5415433f1aff" containerID="33c65ad70d91085715bc675a67dc26448778e53315c13827ed28c79f1083adea" exitCode=0 Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.145428 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 17:00:03 crc kubenswrapper[4808]: E0217 17:00:03.146053 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.268297 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" 
event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerStarted","Data":"6287c9af3f8fc5a9bacd7d967c6c0711a69d46294cccb346aa34f674145f916b"} Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.296360 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" podStartSLOduration=1.663979924 podStartE2EDuration="2.29633784s" podCreationTimestamp="2026-02-17 17:00:01 +0000 UTC" firstStartedPulling="2026-02-17 17:00:01.914310793 +0000 UTC m=+3965.430669866" lastFinishedPulling="2026-02-17 17:00:02.546668709 +0000 UTC m=+3966.063027782" observedRunningTime="2026-02-17 17:00:03.286700378 +0000 UTC m=+3966.803059461" watchObservedRunningTime="2026-02-17 17:00:03.29633784 +0000 UTC m=+3966.812696913" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.743778 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.822013 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod \"7a359510-529f-4c70-8fee-5415433f1aff\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.822242 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"7a359510-529f-4c70-8fee-5415433f1aff\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.822410 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"7a359510-529f-4c70-8fee-5415433f1aff\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.823022 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a359510-529f-4c70-8fee-5415433f1aff" (UID: "7a359510-529f-4c70-8fee-5415433f1aff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.829302 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9" (OuterVolumeSpecName: "kube-api-access-jlwr9") pod "7a359510-529f-4c70-8fee-5415433f1aff" (UID: "7a359510-529f-4c70-8fee-5415433f1aff"). InnerVolumeSpecName "kube-api-access-jlwr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.836809 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a359510-529f-4c70-8fee-5415433f1aff" (UID: "7a359510-529f-4c70-8fee-5415433f1aff"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.925831 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.925874 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.925889 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.279415 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.279399 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerDied","Data":"4d072af7d7b41f63565bf3505064037fbb281aa8cd9e503fc5a958dbac22ec0e"} Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.279539 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d072af7d7b41f63565bf3505064037fbb281aa8cd9e503fc5a958dbac22ec0e" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.326043 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.336053 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.806463 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:04 crc kubenswrapper[4808]: E0217 17:00:04.807000 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a359510-529f-4c70-8fee-5415433f1aff" containerName="collect-profiles" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.807024 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a359510-529f-4c70-8fee-5415433f1aff" containerName="collect-profiles" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.807271 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a359510-529f-4c70-8fee-5415433f1aff" containerName="collect-profiles" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.808968 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.830895 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.861641 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.861819 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.862129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970196 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970306 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970383 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970719 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970785 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.995939 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:05 crc kubenswrapper[4808]: I0217 17:00:05.158964 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f86f53-7772-428e-b916-8624c83de123" path="/var/lib/kubelet/pods/41f86f53-7772-428e-b916-8624c83de123/volumes" Feb 17 17:00:05 crc kubenswrapper[4808]: I0217 17:00:05.168305 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:05 crc kubenswrapper[4808]: I0217 17:00:05.624776 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.270146 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.270473 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.270659 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.271857 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:06 crc kubenswrapper[4808]: I0217 17:00:06.298470 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" exitCode=0 Feb 17 17:00:06 crc kubenswrapper[4808]: I0217 17:00:06.298571 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262"} Feb 17 17:00:06 crc kubenswrapper[4808]: I0217 17:00:06.298941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerStarted","Data":"ce300cc7efbbfaf7ea087a5e466967ad1bec84cde0e6b17839e7b09b820d7cd6"} Feb 17 17:00:07 crc kubenswrapper[4808]: I0217 17:00:07.312087 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerStarted","Data":"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21"} Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.328932 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" exitCode=0 Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.329056 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21"} Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.787436 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.789827 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.815263 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.868621 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.868757 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.868813 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.972633 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.972742 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.972779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.973815 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.973859 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.992745 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.116837 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.349023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerStarted","Data":"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433"} Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.376104 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrqlg" podStartSLOduration=2.722683836 podStartE2EDuration="5.376082628s" podCreationTimestamp="2026-02-17 17:00:04 +0000 UTC" firstStartedPulling="2026-02-17 17:00:06.300676557 +0000 UTC m=+3969.817035630" lastFinishedPulling="2026-02-17 17:00:08.954075349 +0000 UTC m=+3972.470434422" observedRunningTime="2026-02-17 17:00:09.36805007 +0000 UTC m=+3972.884409133" watchObservedRunningTime="2026-02-17 17:00:09.376082628 +0000 UTC m=+3972.892441701" Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.664223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:10 crc kubenswrapper[4808]: I0217 17:00:10.358230 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerID="96f1f271d2bd07ead3d1f83bebbdbbb97452db459ce59a3b4676fb385cc8c17e" exitCode=0 Feb 17 17:00:10 crc kubenswrapper[4808]: I0217 17:00:10.358300 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"96f1f271d2bd07ead3d1f83bebbdbbb97452db459ce59a3b4676fb385cc8c17e"} Feb 17 17:00:10 crc kubenswrapper[4808]: I0217 17:00:10.359771 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerStarted","Data":"318fa89bc3edd094bbe66b4e0345273e686b3e18d39970e04a57723871357c51"} Feb 17 17:00:11 crc kubenswrapper[4808]: I0217 17:00:11.369822 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerStarted","Data":"36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea"} Feb 17 17:00:12 crc kubenswrapper[4808]: E0217 17:00:12.148028 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:14 crc kubenswrapper[4808]: I0217 17:00:14.401522 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerID="36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea" exitCode=0 Feb 17 17:00:14 crc kubenswrapper[4808]: I0217 17:00:14.401611 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea"} Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.169024 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.169257 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.216503 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.414082 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerStarted","Data":"1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9"} Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.430639 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7hsbw" podStartSLOduration=2.979943617 podStartE2EDuration="7.430622232s" podCreationTimestamp="2026-02-17 17:00:08 +0000 UTC" firstStartedPulling="2026-02-17 17:00:10.360294485 +0000 UTC m=+3973.876653548" lastFinishedPulling="2026-02-17 17:00:14.81097309 +0000 UTC m=+3978.327332163" observedRunningTime="2026-02-17 17:00:15.429038348 +0000 UTC m=+3978.945397431" watchObservedRunningTime="2026-02-17 17:00:15.430622232 +0000 UTC m=+3978.946981305" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.484321 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.979305 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:17 crc kubenswrapper[4808]: I0217 17:00:17.152039 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 17:00:17 crc kubenswrapper[4808]: E0217 17:00:17.152699 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:00:17 crc kubenswrapper[4808]: I0217 17:00:17.432700 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrqlg" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" containerID="cri-o://1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" gracePeriod=2 Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.061259 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.147484 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.175558 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.176907 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.177256 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.191337 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities" (OuterVolumeSpecName: "utilities") pod "3e83d8af-25d4-4332-921b-7f4e8b4373c6" (UID: "3e83d8af-25d4-4332-921b-7f4e8b4373c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.212605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e83d8af-25d4-4332-921b-7f4e8b4373c6" (UID: "3e83d8af-25d4-4332-921b-7f4e8b4373c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.251515 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb" (OuterVolumeSpecName: "kube-api-access-dxlpb") pod "3e83d8af-25d4-4332-921b-7f4e8b4373c6" (UID: "3e83d8af-25d4-4332-921b-7f4e8b4373c6"). InnerVolumeSpecName "kube-api-access-dxlpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.280195 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.280234 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.280247 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443235 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" exitCode=0 Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443283 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433"} Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443316 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"ce300cc7efbbfaf7ea087a5e466967ad1bec84cde0e6b17839e7b09b820d7cd6"} Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443333 4808 scope.go:117] "RemoveContainer" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443451 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.468019 4808 scope.go:117] "RemoveContainer" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.486376 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.496606 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.509674 4808 scope.go:117] "RemoveContainer" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.556359 4808 scope.go:117] "RemoveContainer" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.556961 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433\": container with ID starting with 1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433 not found: ID does not exist" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.557005 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433"} err="failed to get container status \"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433\": rpc error: code = NotFound desc = could not find container \"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433\": container with ID starting with 1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433 not found: ID does not exist" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.557037 4808 scope.go:117] "RemoveContainer" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.561756 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21\": container with ID starting with 76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21 not found: ID does not exist" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.561825 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21"} err="failed to get container status \"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21\": rpc error: code = NotFound desc = could not find container \"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21\": container with ID starting with 76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21 not found: ID does not exist" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.561877 4808 scope.go:117] "RemoveContainer" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.562334 4808 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262\": container with ID starting with 3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262 not found: ID does not exist" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.562383 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262"} err="failed to get container status \"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262\": rpc error: code = NotFound desc = could not find container \"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262\": container with ID starting with 3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262 not found: ID does not exist" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.646763 4808 scope.go:117] "RemoveContainer" containerID="af2c8b60da9d5276edbe2e0351b8e1093617fb76e21f063ad9744c8103bb6313" Feb 17 17:00:19 crc kubenswrapper[4808]: I0217 17:00:19.118265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:19 crc kubenswrapper[4808]: I0217 17:00:19.118755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:19 crc kubenswrapper[4808]: I0217 17:00:19.165076 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" path="/var/lib/kubelet/pods/3e83d8af-25d4-4332-921b-7f4e8b4373c6/volumes" Feb 17 17:00:20 crc kubenswrapper[4808]: I0217 17:00:20.614520 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7hsbw" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" probeResult="failure" output=< Feb 17 17:00:20 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:00:20 crc kubenswrapper[4808]: > Feb 17 17:00:25 crc kubenswrapper[4808]: E0217 17:00:25.148743 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:28 crc kubenswrapper[4808]: I0217 17:00:28.146400 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 17:00:28 crc kubenswrapper[4808]: I0217 17:00:28.540308 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a"} Feb 17 17:00:29 crc kubenswrapper[4808]: E0217 17:00:29.179820 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:29 crc kubenswrapper[4808]: I0217 
17:00:29.206298 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:29 crc kubenswrapper[4808]: I0217 17:00:29.265123 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:29 crc kubenswrapper[4808]: I0217 17:00:29.453825 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:30 crc kubenswrapper[4808]: I0217 17:00:30.561477 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7hsbw" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" containerID="cri-o://1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9" gracePeriod=2 Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.581340 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerID="1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9" exitCode=0 Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.581461 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9"} Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.906448 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.996471 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"0b80afd2-f4bc-40fe-9082-9f8db573476c\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.996613 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"0b80afd2-f4bc-40fe-9082-9f8db573476c\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.996769 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"0b80afd2-f4bc-40fe-9082-9f8db573476c\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.997408 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities" (OuterVolumeSpecName: "utilities") pod "0b80afd2-f4bc-40fe-9082-9f8db573476c" (UID: "0b80afd2-f4bc-40fe-9082-9f8db573476c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.012533 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7" (OuterVolumeSpecName: "kube-api-access-8t7k7") pod "0b80afd2-f4bc-40fe-9082-9f8db573476c" (UID: "0b80afd2-f4bc-40fe-9082-9f8db573476c"). InnerVolumeSpecName "kube-api-access-8t7k7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.098462 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.098495 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.136853 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b80afd2-f4bc-40fe-9082-9f8db573476c" (UID: "0b80afd2-f4bc-40fe-9082-9f8db573476c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.199662 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.592372 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"318fa89bc3edd094bbe66b4e0345273e686b3e18d39970e04a57723871357c51"} Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.592447 4808 scope.go:117] "RemoveContainer" containerID="1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.592454 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.626053 4808 scope.go:117] "RemoveContainer" containerID="36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.662742 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.663648 4808 scope.go:117] "RemoveContainer" containerID="96f1f271d2bd07ead3d1f83bebbdbbb97452db459ce59a3b4676fb385cc8c17e" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.670711 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:33 crc kubenswrapper[4808]: I0217 17:00:33.159860 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" path="/var/lib/kubelet/pods/0b80afd2-f4bc-40fe-9082-9f8db573476c/volumes" Feb 17 17:00:37 crc kubenswrapper[4808]: E0217 17:00:37.292609 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:00:37 crc kubenswrapper[4808]: E0217 17:00:37.293387 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:00:37 crc kubenswrapper[4808]: E0217 17:00:37.293638 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:00:37 crc kubenswrapper[4808]: E0217 17:00:37.294956 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:44 crc kubenswrapper[4808]: E0217 17:00:44.148722 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:45 crc kubenswrapper[4808]: I0217 17:00:45.776804 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="ade81c90-5cdf-45d4-ad2f-52a3514e1596" containerName="galera" probeResult="failure" output="command timed out" Feb 17 17:00:48 crc kubenswrapper[4808]: E0217 17:00:48.148138 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:57 crc kubenswrapper[4808]: E0217 17:00:57.157942 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.165051 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522461-f5wx2"] Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166260 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166289 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166310 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166321 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166348 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166361 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-content" Feb 17 17:01:00 crc 
kubenswrapper[4808]: E0217 17:01:00.166396 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166409 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166438 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166449 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166466 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166477 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166800 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166847 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.168017 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.182006 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-f5wx2"] Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360465 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360995 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " 
pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463408 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.470110 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.471422 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.472205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.495181 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.510057 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.040937 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-f5wx2"] Feb 17 17:01:01 crc kubenswrapper[4808]: W0217 17:01:01.044531 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd443f775_9b53_4aaf_bcda_68aed8d88e84.slice/crio-2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882 WatchSource:0}: Error finding container 2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882: Status 404 returned error can't find the container with id 2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882 Feb 17 17:01:01 crc kubenswrapper[4808]: E0217 17:01:01.167051 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.901651 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerStarted","Data":"006837e83c0d08aa480ea6f3d7c1d67333a0c2ed67bca87f005ebff08eb39d6a"} Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.901702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerStarted","Data":"2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882"} Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.925301 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522461-f5wx2" podStartSLOduration=1.92528509 podStartE2EDuration="1.92528509s" podCreationTimestamp="2026-02-17 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:01:01.915034241 +0000 UTC m=+4025.431393314" watchObservedRunningTime="2026-02-17 17:01:01.92528509 +0000 UTC m=+4025.441644153" Feb 17 17:01:03 crc kubenswrapper[4808]: I0217 17:01:03.921476 4808 generic.go:334] "Generic (PLEG): container finished" podID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerID="006837e83c0d08aa480ea6f3d7c1d67333a0c2ed67bca87f005ebff08eb39d6a" exitCode=0 Feb 17 17:01:03 crc kubenswrapper[4808]: I0217 17:01:03.921602 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerDied","Data":"006837e83c0d08aa480ea6f3d7c1d67333a0c2ed67bca87f005ebff08eb39d6a"} Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.357906 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.470338 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.470765 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.470955 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.471144 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.476744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk" (OuterVolumeSpecName: "kube-api-access-jvcvk") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "kube-api-access-jvcvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.479240 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.498689 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.519797 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data" (OuterVolumeSpecName: "config-data") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574722 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574768 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574783 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574793 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.938361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerDied","Data":"2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882"} Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.938715 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.938436 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:12 crc kubenswrapper[4808]: E0217 17:01:12.147800 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:13 crc kubenswrapper[4808]: E0217 17:01:13.147000 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:24 crc kubenswrapper[4808]: E0217 17:01:24.147406 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:25 crc kubenswrapper[4808]: E0217 17:01:25.147899 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:37 crc kubenswrapper[4808]: E0217 17:01:37.156144 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:38 crc kubenswrapper[4808]: E0217 17:01:38.148698 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:51 crc kubenswrapper[4808]: E0217 17:01:51.148698 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:52 crc kubenswrapper[4808]: E0217 17:01:52.149479 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:02:00 crc kubenswrapper[4808]: E0217 17:02:00.184069 4808 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.039s" Feb 17 17:02:03 crc kubenswrapper[4808]: E0217 17:02:03.149045 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:02:03 crc kubenswrapper[4808]: E0217 17:02:03.149045 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:02:16 crc kubenswrapper[4808]: E0217 17:02:16.149441 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:02:17 crc kubenswrapper[4808]: E0217 17:02:17.159728 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:02:29 crc kubenswrapper[4808]: E0217 17:02:29.149065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:02:30 crc kubenswrapper[4808]: E0217 17:02:30.147649 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:02:43 crc kubenswrapper[4808]: E0217 17:02:43.148689 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:02:44 crc kubenswrapper[4808]: E0217 17:02:44.147421 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:02:51 crc kubenswrapper[4808]: I0217 17:02:51.591987 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:02:51 crc kubenswrapper[4808]: I0217 17:02:51.593534 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:02:55 crc kubenswrapper[4808]: E0217 17:02:55.148508 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:02:58 crc kubenswrapper[4808]: E0217 17:02:58.149851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:03:10 crc kubenswrapper[4808]: E0217 17:03:10.148954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.808217 4808 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sx45k"] Feb 17 17:03:10 crc kubenswrapper[4808]: E0217 17:03:10.809008 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerName="keystone-cron" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.809038 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerName="keystone-cron" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.809366 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerName="keystone-cron" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.812040 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.848719 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx45k"] Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.850592 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.850739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.850795 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.953353 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.953452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.953783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k" Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.954508 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.954774 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.989130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:11 crc kubenswrapper[4808]: I0217 17:03:11.144485 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:11 crc kubenswrapper[4808]: E0217 17:03:11.156818 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.280209 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.987815 4808 generic.go:334] "Generic (PLEG): container finished" podID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f" exitCode=0
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.987904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"}
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.988141 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerStarted","Data":"59455c074c8369d9c1bdabb7113ee733d5f53d53a6ad636a052a2f8f11ed7c86"}
Feb 17 17:03:14 crc kubenswrapper[4808]: I0217 17:03:14.001315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerStarted","Data":"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"}
Feb 17 17:03:15 crc kubenswrapper[4808]: I0217 17:03:15.012465 4808 generic.go:334] "Generic (PLEG): container finished" podID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09" exitCode=0
Feb 17 17:03:15 crc kubenswrapper[4808]: I0217 17:03:15.012510 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"}
Feb 17 17:03:16 crc kubenswrapper[4808]: I0217 17:03:16.023737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerStarted","Data":"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"}
Feb 17 17:03:16 crc kubenswrapper[4808]: I0217 17:03:16.040295 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sx45k" podStartSLOduration=3.3967107690000002 podStartE2EDuration="6.040276864s" podCreationTimestamp="2026-02-17 17:03:10 +0000 UTC" firstStartedPulling="2026-02-17 17:03:12.990090933 +0000 UTC m=+4156.506450006" lastFinishedPulling="2026-02-17 17:03:15.633657018 +0000 UTC m=+4159.150016101" observedRunningTime="2026-02-17 17:03:16.038457474 +0000 UTC m=+4159.554816577" watchObservedRunningTime="2026-02-17 17:03:16.040276864 +0000 UTC m=+4159.556635947"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.145738 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.146299 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.193244 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.592796 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.592885 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:03:22 crc kubenswrapper[4808]: I0217 17:03:22.118117 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:22 crc kubenswrapper[4808]: E0217 17:03:22.147991 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:22 crc kubenswrapper[4808]: I0217 17:03:22.174082 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.106416 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sx45k" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" containerID="cri-o://8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc" gracePeriod=2
Feb 17 17:03:24 crc kubenswrapper[4808]: E0217 17:03:24.146813 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.746164 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.852356 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"f017987c-650c-47b4-a33f-3ab1dfb8c281\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") "
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.852460 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"f017987c-650c-47b4-a33f-3ab1dfb8c281\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") "
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.852657 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"f017987c-650c-47b4-a33f-3ab1dfb8c281\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") "
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.854051 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities" (OuterVolumeSpecName: "utilities") pod "f017987c-650c-47b4-a33f-3ab1dfb8c281" (UID: "f017987c-650c-47b4-a33f-3ab1dfb8c281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.857866 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw" (OuterVolumeSpecName: "kube-api-access-gx6bw") pod "f017987c-650c-47b4-a33f-3ab1dfb8c281" (UID: "f017987c-650c-47b4-a33f-3ab1dfb8c281"). InnerVolumeSpecName "kube-api-access-gx6bw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.955126 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") on node \"crc\" DevicePath \"\""
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.955159 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118286 4808 generic.go:334] "Generic (PLEG): container finished" podID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc" exitCode=0
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118347 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"}
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118369 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118398 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"59455c074c8369d9c1bdabb7113ee733d5f53d53a6ad636a052a2f8f11ed7c86"}
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118417 4808 scope.go:117] "RemoveContainer" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.145055 4808 scope.go:117] "RemoveContainer" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.168267 4808 scope.go:117] "RemoveContainer" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.174752 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f017987c-650c-47b4-a33f-3ab1dfb8c281" (UID: "f017987c-650c-47b4-a33f-3ab1dfb8c281"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.251165 4808 scope.go:117] "RemoveContainer" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"
Feb 17 17:03:25 crc kubenswrapper[4808]: E0217 17:03:25.252238 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc\": container with ID starting with 8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc not found: ID does not exist" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252278 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"} err="failed to get container status \"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc\": rpc error: code = NotFound desc = could not find container \"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc\": container with ID starting with 8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc not found: ID does not exist"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252326 4808 scope.go:117] "RemoveContainer" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"
Feb 17 17:03:25 crc kubenswrapper[4808]: E0217 17:03:25.252876 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09\": container with ID starting with 7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09 not found: ID does not exist" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252931 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"} err="failed to get container status \"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09\": rpc error: code = NotFound desc = could not find container \"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09\": container with ID starting with 7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09 not found: ID does not exist"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252968 4808 scope.go:117] "RemoveContainer" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"
Feb 17 17:03:25 crc kubenswrapper[4808]: E0217 17:03:25.253354 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f\": container with ID starting with f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f not found: ID does not exist" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.253390 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"} err="failed to get container status \"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f\": rpc error: code = NotFound desc = could not find container \"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f\": container with ID starting with f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f not found: ID does not exist"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.261994 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.458366 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.473782 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:27 crc kubenswrapper[4808]: I0217 17:03:27.160355 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" path="/var/lib/kubelet/pods/f017987c-650c-47b4-a33f-3ab1dfb8c281/volumes"
Feb 17 17:03:33 crc kubenswrapper[4808]: E0217 17:03:33.147595 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:37 crc kubenswrapper[4808]: E0217 17:03:37.168725 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:03:44 crc kubenswrapper[4808]: E0217 17:03:44.148502 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:51 crc kubenswrapper[4808]: E0217 17:03:51.148114 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.592621 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.592971 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
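The entries above trace one complete marketplace catalog-pod lifecycle: SyncLoop ADD/UPDATE, volume mounts, the extract-utilities and extract-content init containers, the registry-server container with its startup/readiness probes, then DELETE, unmount, RemoveContainer (the NotFound errors are benign double-deletes after CRI-O already pruned the containers), and finally REMOVE. To observe the same lifecycle from the API-server side rather than the kubelet journal, a minimal sketch, assuming the `kubernetes` Python client and a reachable kubeconfig; the namespace and timeout are illustrative:

```python
# Watch pod lifecycle events in a namespace: the API-side analogue of the
# kubelet "SyncLoop ADD/UPDATE/DELETE" entries in the journal above.
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod,
                      namespace="openshift-marketplace",
                      timeout_seconds=60):
    pod = event["object"]
    # event["type"] is ADDED, MODIFIED, or DELETED
    print(f'{event["type"]:8} {pod.metadata.name} phase={pod.status.phase}')
```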
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.593820 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.593883 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a" gracePeriod=600 Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461202 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a" exitCode=0 Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461293 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a"} Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461559 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"} Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461644 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 17:03:57 crc kubenswrapper[4808]: E0217 17:03:57.162256 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:04:04 crc kubenswrapper[4808]: E0217 17:04:04.148288 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:04:10 crc kubenswrapper[4808]: E0217 17:04:10.148974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:04:17 crc kubenswrapper[4808]: E0217 17:04:17.155460 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:04:25 crc kubenswrapper[4808]: E0217 17:04:25.149363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:04:30 crc kubenswrapper[4808]: E0217 17:04:30.149192 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:04:36 crc kubenswrapper[4808]: E0217 17:04:36.147708 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:04:42 crc kubenswrapper[4808]: E0217 17:04:42.148494 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:04:50 crc kubenswrapper[4808]: E0217 17:04:50.149835 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:04:57 crc kubenswrapper[4808]: E0217 17:04:57.164547 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:05:04 crc kubenswrapper[4808]: E0217 17:05:04.148649 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:05:10 crc kubenswrapper[4808]: I0217 17:05:10.162422 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.269165 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.269224 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.269366 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.270684 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:05:17 crc kubenswrapper[4808]: E0217 17:05:17.166045 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:05:24 crc kubenswrapper[4808]: E0217 17:05:24.150001 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:05:30 crc kubenswrapper[4808]: E0217 17:05:30.147642 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:05:39 crc kubenswrapper[4808]: E0217 17:05:39.148022 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.279537 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.280207 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
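The pull failures above expose the root cause of the recurring ImagePullBackOff entries: the current-tested tag was deleted from quay.rdoproject.org, so every pull attempt ends in ErrImagePull and the kubelet backs off. A minimal sketch for finding every container stuck this way, assuming the `kubernetes` Python client; the namespace is illustrative:

```python
# List containers whose waiting reason is ErrImagePull or ImagePullBackOff,
# matching the pod_workers.go "Error syncing pod" entries above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("openstack").items:
    for cs in (pod.status.container_statuses or []):
        waiting = cs.state.waiting if cs.state else None
        if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
            print(f"{pod.metadata.name}/{cs.name}: "
                  f"{waiting.reason}: {waiting.message}")
```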
Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.280207 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.280361 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.281623 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:05:53 crc kubenswrapper[4808]: E0217 17:05:53.147427 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:05:58 crc kubenswrapper[4808]: E0217 17:05:58.149511 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:06:08 crc kubenswrapper[4808]: E0217 17:06:08.148726 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:06:13 crc kubenswrapper[4808]: E0217 17:06:13.156118 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:06:17 crc kubenswrapper[4808]: I0217 17:06:17.938106 4808 generic.go:334] "Generic (PLEG): container finished" podID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" containerID="6287c9af3f8fc5a9bacd7d967c6c0711a69d46294cccb346aa34f674145f916b" exitCode=2
Feb 17 17:06:17 crc kubenswrapper[4808]: I0217 17:06:17.938151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerDied","Data":"6287c9af3f8fc5a9bacd7d967c6c0711a69d46294cccb346aa34f674145f916b"}
Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.878147 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.962038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerDied","Data":"7ccbd48b8c6ddd33e393b5cc60c189b1890685479c8bc28981b9cf1783cd1867"}
Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.962114 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ccbd48b8c6ddd33e393b5cc60c189b1890685479c8bc28981b9cf1783cd1867"
Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.962118 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.993281 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") "
Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.993419 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") "
Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.993496 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") "
Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.000185 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj" (OuterVolumeSpecName: "kube-api-access-94ggj") pod "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" (UID: "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd"). InnerVolumeSpecName "kube-api-access-94ggj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.036841 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory" (OuterVolumeSpecName: "inventory") pod "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" (UID: "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.053630 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" (UID: "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.096388 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") on node \"crc\" DevicePath \"\""
Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.096422 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.096437 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 17:06:21 crc kubenswrapper[4808]: E0217 17:06:21.148143 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:06:21 crc kubenswrapper[4808]: I0217 17:06:21.592599 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:06:21 crc kubenswrapper[4808]: I0217 17:06:21.592685 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:06:24 crc kubenswrapper[4808]: E0217 17:06:24.151185 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:06:33 crc kubenswrapper[4808]: E0217 17:06:33.148014 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:06:36 crc kubenswrapper[4808]: E0217 17:06:36.148860 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:06:44 crc kubenswrapper[4808]: E0217 17:06:44.148593 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:06:49 crc kubenswrapper[4808]: E0217 17:06:49.148736 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:06:51 crc kubenswrapper[4808]: I0217 17:06:51.592664 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:06:51 crc kubenswrapper[4808]: I0217 17:06:51.593074 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:06:57 crc kubenswrapper[4808]: E0217 17:06:57.155340 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:07:01 crc kubenswrapper[4808]: E0217 17:07:01.148106 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:07:12 crc kubenswrapper[4808]: E0217 17:07:12.148429 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:07:12 crc kubenswrapper[4808]: E0217 17:07:12.148564 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.591922 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.592431 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.592471 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.593033 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.593093 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" gracePeriod=600
Feb 17 17:07:21 crc kubenswrapper[4808]: E0217 17:07:21.721541 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.557530 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" exitCode=0
Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.557595 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"}
Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.557876 4808 scope.go:117] "RemoveContainer" containerID="1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a"
Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.558729 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:07:22 crc kubenswrapper[4808]: E0217 17:07:22.559122 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:26 crc kubenswrapper[4808]: E0217 17:07:26.147834 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:07:34 crc kubenswrapper[4808]: I0217 17:07:34.146705 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:07:34 crc kubenswrapper[4808]: E0217 17:07:34.147522 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:07:38 crc kubenswrapper[4808]: E0217 17:07:38.149010 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:39 crc kubenswrapper[4808]: E0217 17:07:39.147286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:07:48 crc kubenswrapper[4808]: I0217 17:07:48.146281 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:07:48 crc kubenswrapper[4808]: E0217 17:07:48.147307 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:07:51 crc kubenswrapper[4808]: E0217 17:07:51.148883 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:54 crc kubenswrapper[4808]: E0217 17:07:54.148611 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:02 crc kubenswrapper[4808]: I0217 17:08:02.146912 4808 scope.go:117] "RemoveContainer" 
containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:02 crc kubenswrapper[4808]: E0217 17:08:02.148049 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:03 crc kubenswrapper[4808]: E0217 17:08:03.149718 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:05 crc kubenswrapper[4808]: E0217 17:08:05.149082 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:15 crc kubenswrapper[4808]: E0217 17:08:15.148817 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:16 crc kubenswrapper[4808]: I0217 17:08:16.145396 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:16 crc kubenswrapper[4808]: E0217 17:08:16.145954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:19 crc kubenswrapper[4808]: E0217 17:08:19.148714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:26 crc kubenswrapper[4808]: E0217 17:08:26.149721 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:28 crc kubenswrapper[4808]: I0217 17:08:28.145755 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:28 crc kubenswrapper[4808]: E0217 17:08:28.147172 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:30 crc kubenswrapper[4808]: E0217 17:08:30.147938 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:40 crc kubenswrapper[4808]: E0217 17:08:40.147564 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:42 crc kubenswrapper[4808]: I0217 17:08:42.145922 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:42 crc kubenswrapper[4808]: E0217 17:08:42.146648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:45 crc kubenswrapper[4808]: E0217 17:08:45.150931 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:52 crc kubenswrapper[4808]: E0217 17:08:52.148139 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:53 crc kubenswrapper[4808]: I0217 17:08:53.146900 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:53 crc kubenswrapper[4808]: E0217 17:08:53.147539 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:59 crc kubenswrapper[4808]: E0217 17:08:59.149396 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:05 crc kubenswrapper[4808]: E0217 17:09:05.148722 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:08 crc kubenswrapper[4808]: I0217 17:09:08.147071 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:09:08 crc kubenswrapper[4808]: E0217 17:09:08.148210 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:09:10 crc kubenswrapper[4808]: E0217 17:09:10.148835 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:18 crc kubenswrapper[4808]: E0217 17:09:18.147951 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:20 crc kubenswrapper[4808]: I0217 17:09:20.147175 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:09:20 crc kubenswrapper[4808]: E0217 17:09:20.148090 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:09:22 crc kubenswrapper[4808]: E0217 17:09:22.149384 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.633315 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634138 4808 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-content" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634609 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-content" Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634629 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634640 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634659 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634669 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634703 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-utilities" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634713 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-utilities" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634975 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.635004 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.636924 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.649178 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.752444 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.752604 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.752654 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.854736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.854896 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.854992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.855479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.855631 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.874681 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.957817 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.460215 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.802004 4808 generic.go:334] "Generic (PLEG): container finished" podID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006" exitCode=0 Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.802199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006"} Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.802358 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerStarted","Data":"ccb25eff53406cd3afddc36977de47347d5a7b3c88fba33ec6d8a31f7f5dacae"} Feb 17 17:09:26 crc kubenswrapper[4808]: I0217 17:09:26.812118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerStarted","Data":"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"} Feb 17 17:09:27 crc kubenswrapper[4808]: I0217 17:09:27.825997 4808 generic.go:334] "Generic (PLEG): container finished" podID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd" exitCode=0 Feb 17 17:09:27 crc kubenswrapper[4808]: I0217 17:09:27.826124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"} Feb 17 17:09:28 crc kubenswrapper[4808]: I0217 17:09:28.840492 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerStarted","Data":"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee"} Feb 17 17:09:28 crc kubenswrapper[4808]: I0217 17:09:28.861789 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmmcg" podStartSLOduration=2.433383673 podStartE2EDuration="4.8617717s" podCreationTimestamp="2026-02-17 17:09:24 +0000 UTC" firstStartedPulling="2026-02-17 17:09:25.81439615 +0000 UTC m=+4529.330755223" lastFinishedPulling="2026-02-17 17:09:28.242784177 +0000 UTC m=+4531.759143250" observedRunningTime="2026-02-17 17:09:28.854491893 +0000 UTC m=+4532.370850976" watchObservedRunningTime="2026-02-17 17:09:28.8617717 +0000 UTC m=+4532.378130773" Feb 17 17:09:32 crc kubenswrapper[4808]: E0217 17:09:32.148773 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:33 crc kubenswrapper[4808]: I0217 17:09:33.146346 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:09:33 crc kubenswrapper[4808]: E0217 17:09:33.146669 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:09:33 crc kubenswrapper[4808]: E0217 17:09:33.148196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:34 crc kubenswrapper[4808]: I0217 17:09:34.958619 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:34 crc kubenswrapper[4808]: I0217 17:09:34.958970 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:35 crc kubenswrapper[4808]: I0217 17:09:35.003858 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:35 crc kubenswrapper[4808]: I0217 17:09:35.986289 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:36 crc kubenswrapper[4808]: I0217 17:09:36.039403 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:37 crc kubenswrapper[4808]: I0217 17:09:37.953370 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cmmcg" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server" containerID="cri-o://1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" gracePeriod=2 Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.621898 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.642306 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.642548 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.642719 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.643993 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities" (OuterVolumeSpecName: "utilities") pod "550853e0-a7b5-406d-bb66-8d36cb6f5f68" (UID: "550853e0-a7b5-406d-bb66-8d36cb6f5f68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.656915 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z" (OuterVolumeSpecName: "kube-api-access-4l87z") pod "550853e0-a7b5-406d-bb66-8d36cb6f5f68" (UID: "550853e0-a7b5-406d-bb66-8d36cb6f5f68"). InnerVolumeSpecName "kube-api-access-4l87z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.745935 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.745974 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.747917 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "550853e0-a7b5-406d-bb66-8d36cb6f5f68" (UID: "550853e0-a7b5-406d-bb66-8d36cb6f5f68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.849349 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.963893 4808 generic.go:334] "Generic (PLEG): container finished" podID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" exitCode=0 Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.963946 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.963946 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee"} Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.964773 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"ccb25eff53406cd3afddc36977de47347d5a7b3c88fba33ec6d8a31f7f5dacae"} Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.964796 4808 scope.go:117] "RemoveContainer" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.991157 4808 scope.go:117] "RemoveContainer" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.014312 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.024961 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.031598 4808 scope.go:117] "RemoveContainer" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.077526 4808 scope.go:117] "RemoveContainer" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" Feb 17 17:09:39 crc kubenswrapper[4808]: E0217 17:09:39.077932 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee\": container with ID starting with 1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee not found: ID does not exist" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.077977 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee"} err="failed to get container status \"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee\": rpc error: code = NotFound desc = could not find container \"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee\": container with ID starting with 1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee not found: ID does not exist" Feb 17 
Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.078003 4808 scope.go:117] "RemoveContainer" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"
Feb 17 17:09:39 crc kubenswrapper[4808]: E0217 17:09:39.078423 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd\": container with ID starting with 94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd not found: ID does not exist" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"
Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.078540 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"} err="failed to get container status \"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd\": rpc error: code = NotFound desc = could not find container \"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd\": container with ID starting with 94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd not found: ID does not exist"
Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.078637 4808 scope.go:117] "RemoveContainer" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006"
Feb 17 17:09:39 crc kubenswrapper[4808]: E0217 17:09:39.079095 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006\": container with ID starting with 95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006 not found: ID does not exist" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006"
Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.079136 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006"} err="failed to get container status \"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006\": rpc error: code = NotFound desc = could not find container \"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006\": container with ID starting with 95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006 not found: ID does not exist"
Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.159267 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" path="/var/lib/kubelet/pods/550853e0-a7b5-406d-bb66-8d36cb6f5f68/volumes"
Feb 17 17:09:47 crc kubenswrapper[4808]: I0217 17:09:47.162657 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:09:47 crc kubenswrapper[4808]: E0217 17:09:47.163453 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:09:47 crc kubenswrapper[4808]: E0217 17:09:47.167815 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:48 crc kubenswrapper[4808]: E0217 17:09:48.147099 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:01 crc kubenswrapper[4808]: E0217 17:10:01.148392 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:10:02 crc kubenswrapper[4808]: I0217 17:10:02.146409 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:10:02 crc kubenswrapper[4808]: E0217 17:10:02.146742 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:10:02 crc kubenswrapper[4808]: E0217 17:10:02.147728 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:13 crc kubenswrapper[4808]: I0217 17:10:13.147834 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.290334 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.290399 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.290539 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.292651 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:10:14 crc kubenswrapper[4808]: E0217 17:10:14.146851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:10:15 crc kubenswrapper[4808]: I0217 17:10:15.145803 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:10:15 crc kubenswrapper[4808]: E0217 17:10:15.146391 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:10:26 crc kubenswrapper[4808]: E0217 17:10:26.149142 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:10:26 crc kubenswrapper[4808]: E0217 17:10:26.150465 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:10:29 crc kubenswrapper[4808]: I0217 17:10:29.145549 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:10:29 crc kubenswrapper[4808]: E0217 17:10:29.146292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.969759 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"]
Feb 17 17:10:35 crc kubenswrapper[4808]: E0217 17:10:35.970863 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server"
Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.970885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server"
Feb 17 17:10:35 crc kubenswrapper[4808]: E0217 17:10:35.970906 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-utilities"
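
The ErrImagePull above is definitive rather than transient: the registry itself reports that the current-tested tag was deleted or expired, so the ImagePullBackOff loop cannot resolve until the tag is re-published ("revived"). One way to confirm a tag's state out-of-band is the standard Registry v2 manifest endpoint. A hedged sketch using only Python's standard library, assuming the registry allows anonymous pulls (a 401 would mean a bearer token is needed, not that the tag is gone):

```python
import urllib.error
import urllib.request

def tag_exists(registry: str, repo: str, tag: str) -> bool:
    # HEAD against the Registry v2 manifest endpoint; 404 means the tag
    # does not resolve, which is what the kubelet's ErrImagePull reports.
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # 401/403 etc.: auth required, not a definitive answer

if __name__ == "__main__":
    print(tag_exists("quay.rdoproject.org",
                     "podified-master-centos10/openstack-cloudkitty-api",
                     "current-tested"))
```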
"Deleted CPUSet assignment" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-utilities" Feb 17 17:10:35 crc kubenswrapper[4808]: E0217 17:10:35.970946 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-content" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.970956 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-content" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.971213 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.973105 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.989398 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.072149 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.072253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.072381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174309 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174512 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 
17:10:36.174884 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174997 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.192942 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.303984 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.804316 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:37 crc kubenswrapper[4808]: I0217 17:10:37.511506 4808 generic.go:334] "Generic (PLEG): container finished" podID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerID="aa60336fee41c015063cc250ca6ff139627382ec60ae1f64c76d7bc307a3dd39" exitCode=0 Feb 17 17:10:37 crc kubenswrapper[4808]: I0217 17:10:37.511562 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"aa60336fee41c015063cc250ca6ff139627382ec60ae1f64c76d7bc307a3dd39"} Feb 17 17:10:37 crc kubenswrapper[4808]: I0217 17:10:37.511836 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerStarted","Data":"79d1d6122d8cbc81beb9e1ded7f118ba4058e37cb6329f8f23299756afedfa1e"} Feb 17 17:10:39 crc kubenswrapper[4808]: E0217 17:10:39.147526 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:10:39 crc kubenswrapper[4808]: I0217 17:10:39.529700 4808 generic.go:334] "Generic (PLEG): container finished" podID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerID="807e02a1ff4264542ae56a6ca4ff858c09eb0c4a3460e96e70dd6f7236fa11ce" exitCode=0 Feb 17 17:10:39 crc kubenswrapper[4808]: I0217 17:10:39.529744 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"807e02a1ff4264542ae56a6ca4ff858c09eb0c4a3460e96e70dd6f7236fa11ce"} Feb 17 17:10:40 crc kubenswrapper[4808]: I0217 17:10:40.542858 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" 
event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerStarted","Data":"8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051"} Feb 17 17:10:40 crc kubenswrapper[4808]: I0217 17:10:40.567334 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f2w9x" podStartSLOduration=3.153401244 podStartE2EDuration="5.567315742s" podCreationTimestamp="2026-02-17 17:10:35 +0000 UTC" firstStartedPulling="2026-02-17 17:10:37.513890956 +0000 UTC m=+4601.030250029" lastFinishedPulling="2026-02-17 17:10:39.927805454 +0000 UTC m=+4603.444164527" observedRunningTime="2026-02-17 17:10:40.562745968 +0000 UTC m=+4604.079105061" watchObservedRunningTime="2026-02-17 17:10:40.567315742 +0000 UTC m=+4604.083674825" Feb 17 17:10:41 crc kubenswrapper[4808]: I0217 17:10:41.161038 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:10:41 crc kubenswrapper[4808]: E0217 17:10:41.161637 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:10:41 crc kubenswrapper[4808]: E0217 17:10:41.162865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:42 crc kubenswrapper[4808]: I0217 17:10:42.967705 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"] Feb 17 17:10:42 crc kubenswrapper[4808]: I0217 17:10:42.970695 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 17:10:42 crc kubenswrapper[4808]: I0217 17:10:42.970695 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:42 crc kubenswrapper[4808]: I0217 17:10:42.990105 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"]
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.035122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.035313 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.035377 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137193 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137480 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137999 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.138086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.678500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.907499 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:44 crc kubenswrapper[4808]: I0217 17:10:44.420774 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"] Feb 17 17:10:44 crc kubenswrapper[4808]: I0217 17:10:44.577704 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerStarted","Data":"9920e8f68f09cf8f04ed794e724f8c7d31d4890592a660c4b9c794b5f1f573ac"} Feb 17 17:10:45 crc kubenswrapper[4808]: I0217 17:10:45.588626 4808 generic.go:334] "Generic (PLEG): container finished" podID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8" exitCode=0 Feb 17 17:10:45 crc kubenswrapper[4808]: I0217 17:10:45.589078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"} Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.304120 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.304410 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.363675 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.598858 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerStarted","Data":"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"} Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.651462 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:48 crc kubenswrapper[4808]: I0217 17:10:48.756304 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:48 crc kubenswrapper[4808]: I0217 17:10:48.757835 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f2w9x" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server" containerID="cri-o://8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051" gracePeriod=2 Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.633619 4808 generic.go:334] "Generic (PLEG): container finished" podID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerID="8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051" exitCode=0 Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.634149 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051"} Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.635507 4808 generic.go:334] "Generic (PLEG): container finished" podID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980" exitCode=0 Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.635534 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"} Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.211663 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.330705 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"64128c02-3c74-41f3-bcdf-81c9026732ea\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.330798 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"64128c02-3c74-41f3-bcdf-81c9026732ea\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.330872 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"64128c02-3c74-41f3-bcdf-81c9026732ea\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.332695 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities" (OuterVolumeSpecName: "utilities") pod "64128c02-3c74-41f3-bcdf-81c9026732ea" (UID: "64128c02-3c74-41f3-bcdf-81c9026732ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.338965 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm" (OuterVolumeSpecName: "kube-api-access-4zssm") pod "64128c02-3c74-41f3-bcdf-81c9026732ea" (UID: "64128c02-3c74-41f3-bcdf-81c9026732ea"). InnerVolumeSpecName "kube-api-access-4zssm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.351142 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64128c02-3c74-41f3-bcdf-81c9026732ea" (UID: "64128c02-3c74-41f3-bcdf-81c9026732ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.433718 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.434172 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") on node \"crc\" DevicePath \"\"" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.434187 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.653871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerStarted","Data":"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"} Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.674166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"79d1d6122d8cbc81beb9e1ded7f118ba4058e37cb6329f8f23299756afedfa1e"} Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.674223 4808 scope.go:117] "RemoveContainer" containerID="8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.674245 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.686586 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55v9n" podStartSLOduration=4.250954671 podStartE2EDuration="9.686550102s" podCreationTimestamp="2026-02-17 17:10:42 +0000 UTC" firstStartedPulling="2026-02-17 17:10:45.590882405 +0000 UTC m=+4609.107241478" lastFinishedPulling="2026-02-17 17:10:51.026477836 +0000 UTC m=+4614.542836909" observedRunningTime="2026-02-17 17:10:51.675030619 +0000 UTC m=+4615.191389692" watchObservedRunningTime="2026-02-17 17:10:51.686550102 +0000 UTC m=+4615.202909175" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.700421 4808 scope.go:117] "RemoveContainer" containerID="807e02a1ff4264542ae56a6ca4ff858c09eb0c4a3460e96e70dd6f7236fa11ce" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.708022 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.716545 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.734772 4808 scope.go:117] "RemoveContainer" containerID="aa60336fee41c015063cc250ca6ff139627382ec60ae1f64c76d7bc307a3dd39" Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.272688 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.272755 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.272893 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.274136 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
Feb 17 17:10:53 crc kubenswrapper[4808]: I0217 17:10:53.159941 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" path="/var/lib/kubelet/pods/64128c02-3c74-41f3-bcdf-81c9026732ea/volumes"
Feb 17 17:10:53 crc kubenswrapper[4808]: I0217 17:10:53.908064 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:53 crc kubenswrapper[4808]: I0217 17:10:53.908115 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:54 crc kubenswrapper[4808]: I0217 17:10:54.146235 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:10:54 crc kubenswrapper[4808]: E0217 17:10:54.146774 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:10:54 crc kubenswrapper[4808]: I0217 17:10:54.958626 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-55v9n" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server" probeResult="failure" output=<
Feb 17 17:10:54 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 17:10:54 crc kubenswrapper[4808]: >
Feb 17 17:10:55 crc kubenswrapper[4808]: E0217 17:10:55.147648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:04 crc kubenswrapper[4808]: E0217 17:11:04.150097 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:11:04 crc kubenswrapper[4808]: I0217 17:11:04.416661 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:11:04 crc kubenswrapper[4808]: I0217 17:11:04.481666 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:11:04 crc kubenswrapper[4808]: I0217 17:11:04.667786 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"]
Feb 17 17:11:05 crc kubenswrapper[4808]: I0217 17:11:05.807349 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55v9n" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server" containerID="cri-o://b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c" gracePeriod=2
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.145393 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.146066 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.445666 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.553187 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") "
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.553252 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") "
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.553308 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") "
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.554462 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities" (OuterVolumeSpecName: "utilities") pod "e6ce41d2-581f-4deb-96e5-feccc71efa4f" (UID: "e6ce41d2-581f-4deb-96e5-feccc71efa4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.559775 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr" (OuterVolumeSpecName: "kube-api-access-7wxzr") pod "e6ce41d2-581f-4deb-96e5-feccc71efa4f" (UID: "e6ce41d2-581f-4deb-96e5-feccc71efa4f"). InnerVolumeSpecName "kube-api-access-7wxzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.656260 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") on node \"crc\" DevicePath \"\"" Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.656488 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.683405 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6ce41d2-581f-4deb-96e5-feccc71efa4f" (UID: "e6ce41d2-581f-4deb-96e5-feccc71efa4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.758952 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818599 4808 generic.go:334] "Generic (PLEG): container finished" podID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c" exitCode=0 Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818648 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"} Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818657 4808 util.go:48] "No ready sandbox for pod can be found. 
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"9920e8f68f09cf8f04ed794e724f8c7d31d4890592a660c4b9c794b5f1f573ac"}
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818720 4808 scope.go:117] "RemoveContainer" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.843913 4808 scope.go:117] "RemoveContainer" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.861079 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"]
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.877152 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"]
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.895988 4808 scope.go:117] "RemoveContainer" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.928498 4808 scope.go:117] "RemoveContainer" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"
Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.929002 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c\": container with ID starting with b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c not found: ID does not exist" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929042 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"} err="failed to get container status \"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c\": rpc error: code = NotFound desc = could not find container \"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c\": container with ID starting with b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c not found: ID does not exist"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929068 4808 scope.go:117] "RemoveContainer" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"
Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.929429 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980\": container with ID starting with fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980 not found: ID does not exist" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929475 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"} err="failed to get container status \"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980\": rpc error: code = NotFound desc = could not find container \"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980\": container with ID starting with fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980 not found: ID does not exist"
\"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980\": container with ID starting with fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980 not found: ID does not exist" Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929516 4808 scope.go:117] "RemoveContainer" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8" Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.929842 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8\": container with ID starting with 8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8 not found: ID does not exist" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8" Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929865 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"} err="failed to get container status \"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8\": rpc error: code = NotFound desc = could not find container \"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8\": container with ID starting with 8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8 not found: ID does not exist" Feb 17 17:11:07 crc kubenswrapper[4808]: I0217 17:11:07.158142 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" path="/var/lib/kubelet/pods/e6ce41d2-581f-4deb-96e5-feccc71efa4f/volumes" Feb 17 17:11:09 crc kubenswrapper[4808]: E0217 17:11:09.148420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:11:18 crc kubenswrapper[4808]: I0217 17:11:18.145748 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:11:18 crc kubenswrapper[4808]: E0217 17:11:18.146523 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:11:18 crc kubenswrapper[4808]: E0217 17:11:18.148946 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:11:24 crc kubenswrapper[4808]: E0217 17:11:24.148025 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:11:30 crc kubenswrapper[4808]: E0217 17:11:30.147829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:11:33 crc kubenswrapper[4808]: I0217 17:11:33.145989 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:11:33 crc kubenswrapper[4808]: E0217 17:11:33.146645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.033789 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"] Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034530 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-utilities" Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034545 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-utilities" Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034560 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server" Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034566 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server" Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034589 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="extract-utilities" Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034596 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="extract-utilities" Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034610 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-content" Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034615 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-content" Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034632 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server" Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034639 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server" Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034657 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="extract-content" Feb 17 17:11:37 crc kubenswrapper[4808]: 
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034923 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034952 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.035882 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.041748 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.041803 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.042521 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.045241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.053752 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"]
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.178464 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.194488 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.194560 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.194663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.297263 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.297360 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.297472 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.303496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.305093 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.314768 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.359674 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.886225 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"]
Feb 17 17:11:38 crc kubenswrapper[4808]: I0217 17:11:38.136461 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerStarted","Data":"51b0c8c29ac10b4d9baa4163a7a8c609d16873c474ab44261d797cf1ed54691b"}
Feb 17 17:11:39 crc kubenswrapper[4808]: I0217 17:11:39.158480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerStarted","Data":"567a499a540dcc4f77c295be8cc3ad41d4b2ef5fffbee3f75374436d200ff856"}
Feb 17 17:11:39 crc kubenswrapper[4808]: I0217 17:11:39.168982 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" podStartSLOduration=1.72798361 podStartE2EDuration="2.168957646s" podCreationTimestamp="2026-02-17 17:11:37 +0000 UTC" firstStartedPulling="2026-02-17 17:11:37.893193339 +0000 UTC m=+4661.409552402" lastFinishedPulling="2026-02-17 17:11:38.334167365 +0000 UTC m=+4661.850526438" observedRunningTime="2026-02-17 17:11:39.158449541 +0000 UTC m=+4662.674808634" watchObservedRunningTime="2026-02-17 17:11:39.168957646 +0000 UTC m=+4662.685316729"
Feb 17 17:11:44 crc kubenswrapper[4808]: I0217 17:11:44.146029 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:44 crc kubenswrapper[4808]: E0217 17:11:44.147013 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:44 crc kubenswrapper[4808]: E0217 17:11:44.148750 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:11:52 crc kubenswrapper[4808]: E0217 17:11:52.148913 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:58 crc kubenswrapper[4808]: I0217 17:11:58.145465 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:58 crc kubenswrapper[4808]: E0217 17:11:58.146272 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:59 crc kubenswrapper[4808]: E0217 17:11:59.148347 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:12:03 crc kubenswrapper[4808]: E0217 17:12:03.149174 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:12:10 crc kubenswrapper[4808]: I0217 17:12:10.145965 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:12:10 crc kubenswrapper[4808]: E0217 17:12:10.146718 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:12:11 crc kubenswrapper[4808]: E0217 17:12:11.148981 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:12:16 crc kubenswrapper[4808]: E0217 17:12:16.147852 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:12:21 crc kubenswrapper[4808]: I0217 17:12:21.145369 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:12:21 crc kubenswrapper[4808]: E0217 17:12:21.146269 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:12:24 crc kubenswrapper[4808]: E0217 17:12:24.149513 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:12:29 crc kubenswrapper[4808]: E0217 17:12:29.150256 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:12:34 crc kubenswrapper[4808]: I0217 17:12:34.146659 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:12:35 crc kubenswrapper[4808]: I0217 17:12:35.720845 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6"} Feb 17 17:12:38 crc kubenswrapper[4808]: E0217 17:12:38.149180 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:12:41 crc kubenswrapper[4808]: E0217 17:12:41.148321 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:12:50 crc kubenswrapper[4808]: E0217 17:12:50.150514 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:12:52 crc kubenswrapper[4808]: E0217 17:12:52.147417 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:03 crc kubenswrapper[4808]: E0217 17:13:03.147286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:05 crc kubenswrapper[4808]: E0217 17:13:05.147563 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:13:14 crc kubenswrapper[4808]: E0217 17:13:14.148894 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:17 crc kubenswrapper[4808]: E0217 17:13:17.157714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:13:26 crc kubenswrapper[4808]: E0217 17:13:26.147024 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:31 crc kubenswrapper[4808]: E0217 17:13:31.150470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.679868 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.683125 4808 util.go:30] "No sandbox for pod can be found. 
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.692812 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvffh"]
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.859752 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.859793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.859981 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.962944 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963278 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963326 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963501 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963780 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:37 crc kubenswrapper[4808]: I0217 17:13:37.176496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
"MountVolume.SetUp succeeded for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:37 crc kubenswrapper[4808]: I0217 17:13:37.307252 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:37 crc kubenswrapper[4808]: I0217 17:13:37.786287 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:38 crc kubenswrapper[4808]: I0217 17:13:38.344850 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" exitCode=0 Feb 17 17:13:38 crc kubenswrapper[4808]: I0217 17:13:38.344912 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929"} Feb 17 17:13:38 crc kubenswrapper[4808]: I0217 17:13:38.345074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerStarted","Data":"625c4fef6bea75edcad876b08bf830934120d670591765b411b4986a0f6c1872"} Feb 17 17:13:40 crc kubenswrapper[4808]: E0217 17:13:40.148423 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:40 crc kubenswrapper[4808]: I0217 17:13:40.370417 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" exitCode=0 Feb 17 17:13:40 crc kubenswrapper[4808]: I0217 17:13:40.370495 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d"} Feb 17 17:13:41 crc kubenswrapper[4808]: I0217 17:13:41.381477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerStarted","Data":"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768"} Feb 17 17:13:41 crc kubenswrapper[4808]: I0217 17:13:41.406938 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zvffh" podStartSLOduration=3.005038174 podStartE2EDuration="5.406917335s" podCreationTimestamp="2026-02-17 17:13:36 +0000 UTC" firstStartedPulling="2026-02-17 17:13:38.34726305 +0000 UTC m=+4781.863622133" lastFinishedPulling="2026-02-17 17:13:40.749142221 +0000 UTC m=+4784.265501294" observedRunningTime="2026-02-17 17:13:41.397099588 +0000 UTC m=+4784.913458671" watchObservedRunningTime="2026-02-17 17:13:41.406917335 +0000 UTC m=+4784.923276418" Feb 17 17:13:45 crc 
Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.307920 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.308244 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.366837 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.488022 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.607257 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvffh"]
Feb 17 17:13:49 crc kubenswrapper[4808]: I0217 17:13:49.462339 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zvffh" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" containerID="cri-o://5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" gracePeriod=2
Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.001588 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.148313 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") "
Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.148369 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") "
Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.148462 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") "
Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.149284 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities" (OuterVolumeSpecName: "utilities") pod "ca6bd2a4-d763-4e62-987d-a92c0b70ab23" (UID: "ca6bd2a4-d763-4e62-987d-a92c0b70ab23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.154314 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv" (OuterVolumeSpecName: "kube-api-access-q4mhv") pod "ca6bd2a4-d763-4e62-987d-a92c0b70ab23" (UID: "ca6bd2a4-d763-4e62-987d-a92c0b70ab23"). InnerVolumeSpecName "kube-api-access-q4mhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.252736 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.252796 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472445 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" exitCode=0 Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768"} Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472501 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"625c4fef6bea75edcad876b08bf830934120d670591765b411b4986a0f6c1872"} Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472589 4808 scope.go:117] "RemoveContainer" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.489981 4808 scope.go:117] "RemoveContainer" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.510731 4808 scope.go:117] "RemoveContainer" containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.573617 4808 scope.go:117] "RemoveContainer" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" Feb 17 17:13:50 crc kubenswrapper[4808]: E0217 17:13:50.574065 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768\": container with ID starting with 5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768 not found: ID does not exist" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574101 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768"} err="failed to get container status \"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768\": rpc error: code = NotFound desc = could not find container \"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768\": container with ID starting with 5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768 not found: ID does not exist" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574128 4808 scope.go:117] "RemoveContainer" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" Feb 17 17:13:50 crc kubenswrapper[4808]: E0217 17:13:50.574376 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d\": container with ID starting with 25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d not found: ID does not exist" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574400 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d"} err="failed to get container status \"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d\": rpc error: code = NotFound desc = could not find container \"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d\": container with ID starting with 25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d not found: ID does not exist" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574414 4808 scope.go:117] "RemoveContainer" containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" Feb 17 17:13:50 crc kubenswrapper[4808]: E0217 17:13:50.574718 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929\": container with ID starting with 1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929 not found: ID does not exist" containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574768 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929"} err="failed to get container status \"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929\": rpc error: code = NotFound desc = could not find container \"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929\": container with ID starting with 1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929 not found: ID does not exist" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.953675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca6bd2a4-d763-4e62-987d-a92c0b70ab23" (UID: "ca6bd2a4-d763-4e62-987d-a92c0b70ab23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.978114 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:51 crc kubenswrapper[4808]: I0217 17:13:51.165027 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:51 crc kubenswrapper[4808]: I0217 17:13:51.165061 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:53 crc kubenswrapper[4808]: I0217 17:13:53.157197 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" path="/var/lib/kubelet/pods/ca6bd2a4-d763-4e62-987d-a92c0b70ab23/volumes" Feb 17 17:13:55 crc kubenswrapper[4808]: E0217 17:13:55.148228 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:59 crc kubenswrapper[4808]: E0217 17:13:59.151377 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:10 crc kubenswrapper[4808]: E0217 17:14:10.160064 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:12 crc kubenswrapper[4808]: E0217 17:14:12.183276 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:24 crc kubenswrapper[4808]: E0217 17:14:24.149098 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:25 crc kubenswrapper[4808]: E0217 17:14:25.149216 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:37 crc kubenswrapper[4808]: E0217 17:14:37.160565 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:38 crc kubenswrapper[4808]: E0217 17:14:38.148865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:48 crc kubenswrapper[4808]: E0217 17:14:48.148884 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:51 crc kubenswrapper[4808]: E0217 17:14:51.148119 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:51 crc kubenswrapper[4808]: I0217 17:14:51.592595 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:51 crc kubenswrapper[4808]: I0217 17:14:51.592876 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:59 crc kubenswrapper[4808]: E0217 17:14:59.148952 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.160825 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9"] Feb 17 17:15:00 crc kubenswrapper[4808]: E0217 17:15:00.161675 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.161693 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4808]: E0217 17:15:00.161706 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.161713 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4808]: E0217 17:15:00.161735 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.161743 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.162112 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.163061 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.164936 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.165090 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.171059 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9"] Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.248454 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.248527 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.248555 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.350352 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.350463 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: 
\"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.350500 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.351344 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.379800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.380933 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.484080 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.955922 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9"] Feb 17 17:15:01 crc kubenswrapper[4808]: I0217 17:15:01.144343 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" event={"ID":"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6","Type":"ContainerStarted","Data":"4291df65054dcd470f5111a9574440824feb650adcbbe4cc8e3880fa44689cf1"} Feb 17 17:15:02 crc kubenswrapper[4808]: I0217 17:15:02.157192 4808 generic.go:334] "Generic (PLEG): container finished" podID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerID="cc95aa572ff0403bb73e21beb9f0dc29f6d5c4ca75ea590e0734ae58b602f1f0" exitCode=0 Feb 17 17:15:02 crc kubenswrapper[4808]: I0217 17:15:02.157491 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" event={"ID":"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6","Type":"ContainerDied","Data":"cc95aa572ff0403bb73e21beb9f0dc29f6d5c4ca75ea590e0734ae58b602f1f0"} Feb 17 17:15:03 crc kubenswrapper[4808]: E0217 17:15:03.147794 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.571040 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.722860 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.723014 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.723057 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.723900 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" (UID: "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.731720 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" (UID: "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.734800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h" (OuterVolumeSpecName: "kube-api-access-tzc8h") pod "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" (UID: "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6"). InnerVolumeSpecName "kube-api-access-tzc8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.826162 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.826194 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.826203 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.180684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" event={"ID":"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6","Type":"ContainerDied","Data":"4291df65054dcd470f5111a9574440824feb650adcbbe4cc8e3880fa44689cf1"} Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.180731 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.180746 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4291df65054dcd470f5111a9574440824feb650adcbbe4cc8e3880fa44689cf1" Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.654098 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.665354 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 17:15:05 crc kubenswrapper[4808]: I0217 17:15:05.165122 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" path="/var/lib/kubelet/pods/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf/volumes" Feb 17 17:15:11 crc kubenswrapper[4808]: E0217 17:15:11.148391 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:15 crc kubenswrapper[4808]: E0217 17:15:15.149910 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:19 crc kubenswrapper[4808]: I0217 17:15:19.046496 4808 scope.go:117] "RemoveContainer" containerID="c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe" Feb 17 17:15:21 crc kubenswrapper[4808]: I0217 17:15:21.592995 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:15:21 crc kubenswrapper[4808]: I0217 17:15:21.593297 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:15:24 crc kubenswrapper[4808]: I0217 17:15:24.149617 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.273544 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.274174 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.274406 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.275870 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:28 crc kubenswrapper[4808]: E0217 17:15:28.149687 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:36 crc kubenswrapper[4808]: E0217 17:15:36.147835 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:42 crc kubenswrapper[4808]: E0217 17:15:42.149213 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:50 crc kubenswrapper[4808]: E0217 17:15:50.147935 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.592482 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.592763 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.592807 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.593561 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6"} 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.593688 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6" gracePeriod=600 Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.722856 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6" exitCode=0 Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.722907 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6"} Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.722945 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:15:52 crc kubenswrapper[4808]: I0217 17:15:52.754305 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"} Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.275960 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.276656 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.276822 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.278983 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:05 crc kubenswrapper[4808]: E0217 17:16:05.151062 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:09 crc kubenswrapper[4808]: E0217 17:16:09.147774 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:17 crc kubenswrapper[4808]: E0217 17:16:17.162284 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:24 crc kubenswrapper[4808]: E0217 17:16:24.147944 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:32 crc kubenswrapper[4808]: E0217 17:16:32.148518 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:39 crc kubenswrapper[4808]: E0217 17:16:39.147469 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:43 crc kubenswrapper[4808]: E0217 17:16:43.147052 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:50 crc kubenswrapper[4808]: E0217 17:16:50.148337 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:57 crc kubenswrapper[4808]: E0217 17:16:57.162252 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:04 crc kubenswrapper[4808]: E0217 17:17:04.150650 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:09 crc kubenswrapper[4808]: E0217 17:17:09.147895 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:18 crc kubenswrapper[4808]: E0217 17:17:18.148215 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:22 crc kubenswrapper[4808]: E0217 17:17:22.148309 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:32 crc kubenswrapper[4808]: E0217 17:17:32.149416 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:36 crc kubenswrapper[4808]: E0217 17:17:36.148416 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:44 crc kubenswrapper[4808]: E0217 17:17:44.150065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:51 crc kubenswrapper[4808]: E0217 17:17:51.149030 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:51 crc 
kubenswrapper[4808]: I0217 17:17:51.592775 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:17:51 crc kubenswrapper[4808]: I0217 17:17:51.593048 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:17:51 crc kubenswrapper[4808]: I0217 17:17:51.922331 4808 generic.go:334] "Generic (PLEG): container finished" podID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerID="567a499a540dcc4f77c295be8cc3ad41d4b2ef5fffbee3f75374436d200ff856" exitCode=2 Feb 17 17:17:51 crc kubenswrapper[4808]: I0217 17:17:51.922373 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerDied","Data":"567a499a540dcc4f77c295be8cc3ad41d4b2ef5fffbee3f75374436d200ff856"} Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.455565 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.538398 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"8b75e2b3-ab6a-4088-897b-7a11da62a654\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.538560 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"8b75e2b3-ab6a-4088-897b-7a11da62a654\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.538607 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"8b75e2b3-ab6a-4088-897b-7a11da62a654\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.544796 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd" (OuterVolumeSpecName: "kube-api-access-w6mwd") pod "8b75e2b3-ab6a-4088-897b-7a11da62a654" (UID: "8b75e2b3-ab6a-4088-897b-7a11da62a654"). InnerVolumeSpecName "kube-api-access-w6mwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.568080 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b75e2b3-ab6a-4088-897b-7a11da62a654" (UID: "8b75e2b3-ab6a-4088-897b-7a11da62a654"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.573246 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory" (OuterVolumeSpecName: "inventory") pod "8b75e2b3-ab6a-4088-897b-7a11da62a654" (UID: "8b75e2b3-ab6a-4088-897b-7a11da62a654"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.642137 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.642180 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.642194 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.941462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerDied","Data":"51b0c8c29ac10b4d9baa4163a7a8c609d16873c474ab44261d797cf1ed54691b"} Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.941502 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b0c8c29ac10b4d9baa4163a7a8c609d16873c474ab44261d797cf1ed54691b" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.941609 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" Feb 17 17:17:58 crc kubenswrapper[4808]: E0217 17:17:58.148250 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:18:06 crc kubenswrapper[4808]: E0217 17:18:06.147457 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:18:13 crc kubenswrapper[4808]: E0217 17:18:13.147753 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:18:19 crc kubenswrapper[4808]: E0217 17:18:19.147735 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:18:21 crc kubenswrapper[4808]: I0217 17:18:21.592267 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:18:21 crc kubenswrapper[4808]: I0217 17:18:21.593616 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:18:24 crc kubenswrapper[4808]: E0217 17:18:24.149266 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:18:34 crc kubenswrapper[4808]: E0217 17:18:34.935478 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:18:34 crc kubenswrapper[4808]: I0217 17:18:34.963483 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c58vl" podUID="42711d14-278f-41eb-80ce-2e67add356b9" containerName="frr" probeResult="failure" 
output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 17:18:37 crc kubenswrapper[4808]: E0217 17:18:37.156841 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:18:49 crc kubenswrapper[4808]: E0217 17:18:49.150819 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:18:51 crc kubenswrapper[4808]: E0217 17:18:51.150829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.592266 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.592344 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.592399 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.593359 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.593482 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" gracePeriod=600 Feb 17 17:18:51 crc kubenswrapper[4808]: E0217 17:18:51.724890 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:18:51 crc kubenswrapper[4808]: E0217 17:18:51.788184 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca38b6e7_b21c_453d_8b6c_a163dac84b35.slice/crio-conmon-700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.234166 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" exitCode=0 Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.234248 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"} Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.234523 4808 scope.go:117] "RemoveContainer" containerID="58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6" Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.235368 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:18:52 crc kubenswrapper[4808]: E0217 17:18:52.235705 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:19:02 crc kubenswrapper[4808]: E0217 17:19:02.148420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:19:05 crc kubenswrapper[4808]: E0217 17:19:05.164481 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:19:07 crc kubenswrapper[4808]: I0217 17:19:07.152252 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:19:07 crc kubenswrapper[4808]: E0217 17:19:07.152902 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:19:16 crc kubenswrapper[4808]: E0217 17:19:16.147479 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:19:17 crc kubenswrapper[4808]: E0217 17:19:17.158104 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.274068 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"] Feb 17 17:19:19 crc kubenswrapper[4808]: E0217 17:19:19.283820 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerName="collect-profiles" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.283862 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerName="collect-profiles" Feb 17 17:19:19 crc kubenswrapper[4808]: E0217 17:19:19.283872 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.283882 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.284152 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.284190 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerName="collect-profiles" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.285801 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.289286 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v84wc"/"kube-root-ca.crt" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.289752 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v84wc"/"openshift-service-ca.crt" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.346486 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.346675 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.378223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"] Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.448979 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.449067 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.449870 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.470426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.619424 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:19:20 crc kubenswrapper[4808]: I0217 17:19:20.149214 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:19:20 crc kubenswrapper[4808]: E0217 17:19:20.150326 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:19:20 crc kubenswrapper[4808]: I0217 17:19:20.301370 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"] Feb 17 17:19:20 crc kubenswrapper[4808]: W0217 17:19:20.312241 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6431aef1_ada4_4683_967f_18a8a901d3f7.slice/crio-c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b WatchSource:0}: Error finding container c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b: Status 404 returned error can't find the container with id c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b Feb 17 17:19:20 crc kubenswrapper[4808]: I0217 17:19:20.561856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerStarted","Data":"c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b"} Feb 17 17:19:29 crc kubenswrapper[4808]: E0217 17:19:29.147126 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:19:29 crc kubenswrapper[4808]: I0217 17:19:29.652867 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerStarted","Data":"271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306"} Feb 17 17:19:29 crc kubenswrapper[4808]: I0217 17:19:29.652918 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerStarted","Data":"c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2"} Feb 17 17:19:29 crc kubenswrapper[4808]: I0217 17:19:29.677882 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v84wc/must-gather-25mrk" podStartSLOduration=2.374339833 podStartE2EDuration="10.677861219s" podCreationTimestamp="2026-02-17 17:19:19 +0000 UTC" firstStartedPulling="2026-02-17 17:19:20.314185339 +0000 UTC m=+5123.830544412" lastFinishedPulling="2026-02-17 17:19:28.617706725 +0000 UTC m=+5132.134065798" observedRunningTime="2026-02-17 17:19:29.668108432 +0000 UTC m=+5133.184467515" watchObservedRunningTime="2026-02-17 17:19:29.677861219 +0000 UTC m=+5133.194220282" Feb 17 17:19:30 crc kubenswrapper[4808]: I0217 17:19:30.953405 4808 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:19:30 crc kubenswrapper[4808]: I0217 17:19:30.956633 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:30 crc kubenswrapper[4808]: I0217 17:19:30.987299 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.133905 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.133981 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.134139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: E0217 17:19:31.150521 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.236474 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.236552 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.236806 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.237802 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod 
\"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.237930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.264062 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.282188 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:32.448832 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:19:35 crc kubenswrapper[4808]: W0217 17:19:32.451428 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6463b44f_0536_4c98_964e_ffefaf92dd97.slice/crio-ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc WatchSource:0}: Error finding container ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc: Status 404 returned error can't find the container with id ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:32.694376 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerStarted","Data":"ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc"} Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:33.146285 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:19:35 crc kubenswrapper[4808]: E0217 17:19:33.146861 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:33.708307 4808 generic.go:334] "Generic (PLEG): container finished" podID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerID="9102d6dcaf6e3fbf8c87936c002d9f93bfb04d65b7f6656f4e84306710e44084" exitCode=0 Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:33.708361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"9102d6dcaf6e3fbf8c87936c002d9f93bfb04d65b7f6656f4e84306710e44084"} Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.594952 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-v84wc/crc-debug-msb9f"] Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.597729 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.599852 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v84wc"/"default-dockercfg-f8jxd" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.719911 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.719994 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.821886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.822078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.822097 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.843799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.926052 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:35.728182 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-msb9f" event={"ID":"1421f2cf-bbb7-4679-a249-d3233f1a590a","Type":"ContainerStarted","Data":"684b470c63b940008787f4d6bf54bfccbbb02315a2dd741d1a163efc01817f3e"} Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:35.731630 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerStarted","Data":"7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b"} Feb 17 17:19:40 crc kubenswrapper[4808]: I0217 17:19:40.822614 4808 generic.go:334] "Generic (PLEG): container finished" podID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerID="7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b" exitCode=0 Feb 17 17:19:40 crc kubenswrapper[4808]: I0217 17:19:40.822702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b"} Feb 17 17:19:41 crc kubenswrapper[4808]: E0217 17:19:41.148863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:19:41 crc kubenswrapper[4808]: I0217 17:19:41.837713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerStarted","Data":"ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b"} Feb 17 17:19:41 crc kubenswrapper[4808]: I0217 17:19:41.869257 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4t4r8" podStartSLOduration=4.361974856 podStartE2EDuration="11.869237973s" podCreationTimestamp="2026-02-17 17:19:30 +0000 UTC" firstStartedPulling="2026-02-17 17:19:33.710627144 +0000 UTC m=+5137.226986217" lastFinishedPulling="2026-02-17 17:19:41.217890261 +0000 UTC m=+5144.734249334" observedRunningTime="2026-02-17 17:19:41.858600202 +0000 UTC m=+5145.374959285" watchObservedRunningTime="2026-02-17 17:19:41.869237973 +0000 UTC m=+5145.385597046" Feb 17 17:19:42 crc kubenswrapper[4808]: E0217 17:19:42.147922 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:19:48 crc kubenswrapper[4808]: I0217 17:19:48.147063 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:19:48 crc kubenswrapper[4808]: E0217 17:19:48.147858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:19:48 crc kubenswrapper[4808]: I0217 17:19:48.922199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-msb9f" event={"ID":"1421f2cf-bbb7-4679-a249-d3233f1a590a","Type":"ContainerStarted","Data":"fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140"} Feb 17 17:19:48 crc kubenswrapper[4808]: I0217 17:19:48.937688 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v84wc/crc-debug-msb9f" podStartSLOduration=1.343487591 podStartE2EDuration="14.937667224s" podCreationTimestamp="2026-02-17 17:19:34 +0000 UTC" firstStartedPulling="2026-02-17 17:19:35.100208142 +0000 UTC m=+5138.616567215" lastFinishedPulling="2026-02-17 17:19:48.694387765 +0000 UTC m=+5152.210746848" observedRunningTime="2026-02-17 17:19:48.935306299 +0000 UTC m=+5152.451665372" watchObservedRunningTime="2026-02-17 17:19:48.937667224 +0000 UTC m=+5152.454026297" Feb 17 17:19:51 crc kubenswrapper[4808]: I0217 17:19:51.282975 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:51 crc kubenswrapper[4808]: I0217 17:19:51.283418 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:52 crc kubenswrapper[4808]: I0217 17:19:52.343184 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4t4r8" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" probeResult="failure" output=< Feb 17 17:19:52 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:19:52 crc kubenswrapper[4808]: > Feb 17 17:19:53 crc kubenswrapper[4808]: E0217 17:19:53.147472 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:19:55 crc kubenswrapper[4808]: E0217 17:19:55.147615 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:01 crc kubenswrapper[4808]: I0217 17:20:01.334974 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:01 crc kubenswrapper[4808]: I0217 17:20:01.382017 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:01 crc kubenswrapper[4808]: I0217 17:20:01.570510 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:20:02 crc kubenswrapper[4808]: I0217 17:20:02.145716 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:02 crc 
kubenswrapper[4808]: E0217 17:20:02.146009 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:03 crc kubenswrapper[4808]: I0217 17:20:03.082448 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4t4r8" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" containerID="cri-o://ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b" gracePeriod=2 Feb 17 17:20:05 crc kubenswrapper[4808]: I0217 17:20:05.101804 4808 generic.go:334] "Generic (PLEG): container finished" podID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerID="ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b" exitCode=0 Feb 17 17:20:05 crc kubenswrapper[4808]: I0217 17:20:05.101898 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b"} Feb 17 17:20:06 crc kubenswrapper[4808]: I0217 17:20:06.121171 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc"} Feb 17 17:20:06 crc kubenswrapper[4808]: I0217 17:20:06.121816 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.032148 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.136809 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:08 crc kubenswrapper[4808]: E0217 17:20:08.148275 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:08 crc kubenswrapper[4808]: E0217 17:20:08.148425 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.206685 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"6463b44f-0536-4c98-964e-ffefaf92dd97\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.206815 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"6463b44f-0536-4c98-964e-ffefaf92dd97\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.206939 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod \"6463b44f-0536-4c98-964e-ffefaf92dd97\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.207554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities" (OuterVolumeSpecName: "utilities") pod "6463b44f-0536-4c98-964e-ffefaf92dd97" (UID: "6463b44f-0536-4c98-964e-ffefaf92dd97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.213647 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5" (OuterVolumeSpecName: "kube-api-access-fzzf5") pod "6463b44f-0536-4c98-964e-ffefaf92dd97" (UID: "6463b44f-0536-4c98-964e-ffefaf92dd97"). InnerVolumeSpecName "kube-api-access-fzzf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.269568 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6463b44f-0536-4c98-964e-ffefaf92dd97" (UID: "6463b44f-0536-4c98-964e-ffefaf92dd97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.309697 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.309741 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.309761 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.507343 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.520980 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:20:09 crc kubenswrapper[4808]: I0217 17:20:09.156322 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" path="/var/lib/kubelet/pods/6463b44f-0536-4c98-964e-ffefaf92dd97/volumes" Feb 17 17:20:14 crc kubenswrapper[4808]: I0217 17:20:14.146657 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:14 crc kubenswrapper[4808]: E0217 17:20:14.148553 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:20 crc kubenswrapper[4808]: E0217 17:20:20.148768 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:22 crc kubenswrapper[4808]: E0217 17:20:22.148974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:26 crc kubenswrapper[4808]: I0217 17:20:26.146844 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:26 crc kubenswrapper[4808]: E0217 17:20:26.148296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:27 crc kubenswrapper[4808]: I0217 17:20:27.347794 4808 generic.go:334] "Generic (PLEG): container finished" podID="1421f2cf-bbb7-4679-a249-d3233f1a590a" containerID="fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140" exitCode=0 Feb 17 17:20:27 crc kubenswrapper[4808]: I0217 17:20:27.347883 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-msb9f" event={"ID":"1421f2cf-bbb7-4679-a249-d3233f1a590a","Type":"ContainerDied","Data":"fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140"} Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.517587 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.555718 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-msb9f"] Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.567287 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-msb9f"] Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.641280 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"1421f2cf-bbb7-4679-a249-d3233f1a590a\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.641427 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"1421f2cf-bbb7-4679-a249-d3233f1a590a\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.641429 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host" (OuterVolumeSpecName: "host") pod "1421f2cf-bbb7-4679-a249-d3233f1a590a" (UID: "1421f2cf-bbb7-4679-a249-d3233f1a590a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.642003 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.649630 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq" (OuterVolumeSpecName: "kube-api-access-5qjlq") pod "1421f2cf-bbb7-4679-a249-d3233f1a590a" (UID: "1421f2cf-bbb7-4679-a249-d3233f1a590a"). InnerVolumeSpecName "kube-api-access-5qjlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.743811 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.155998 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" path="/var/lib/kubelet/pods/1421f2cf-bbb7-4679-a249-d3233f1a590a/volumes" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.366880 4808 scope.go:117] "RemoveContainer" containerID="fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.366936 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737070 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v84wc/crc-debug-s4cnw"] Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737713 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" containerName="container-00" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737725 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" containerName="container-00" Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737757 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-content" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737763 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-content" Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737778 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-utilities" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737786 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-utilities" Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737798 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737803 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737981 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" containerName="container-00" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.738000 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.738712 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.740716 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v84wc"/"default-dockercfg-f8jxd" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.866444 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.866854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.969427 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.969595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.969700 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.988598 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:30 crc kubenswrapper[4808]: I0217 17:20:30.056226 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:30 crc kubenswrapper[4808]: I0217 17:20:30.385159 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" event={"ID":"99456b7d-1910-4568-bd41-1530e3e72765","Type":"ContainerStarted","Data":"ab59990105963838e5a53279c7bde66d50d8761853c8aa2e109846f43c7c2405"} Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.417145 4808 generic.go:334] "Generic (PLEG): container finished" podID="99456b7d-1910-4568-bd41-1530e3e72765" containerID="886212de31c048e2a4a7d6ec1f21ce8db66db2cb601a099787b0b26295d79e07" exitCode=0 Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.417246 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" event={"ID":"99456b7d-1910-4568-bd41-1530e3e72765","Type":"ContainerDied","Data":"886212de31c048e2a4a7d6ec1f21ce8db66db2cb601a099787b0b26295d79e07"} Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.927146 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-s4cnw"] Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.941634 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-s4cnw"] Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.538875 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.622215 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"99456b7d-1910-4568-bd41-1530e3e72765\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.622394 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod \"99456b7d-1910-4568-bd41-1530e3e72765\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.622908 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host" (OuterVolumeSpecName: "host") pod "99456b7d-1910-4568-bd41-1530e3e72765" (UID: "99456b7d-1910-4568-bd41-1530e3e72765"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.623319 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.630302 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52" (OuterVolumeSpecName: "kube-api-access-fxf52") pod "99456b7d-1910-4568-bd41-1530e3e72765" (UID: "99456b7d-1910-4568-bd41-1530e3e72765"). InnerVolumeSpecName "kube-api-access-fxf52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.726019 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.148008 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.157513 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99456b7d-1910-4568-bd41-1530e3e72765" path="/var/lib/kubelet/pods/99456b7d-1910-4568-bd41-1530e3e72765/volumes" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.213519 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v84wc/crc-debug-8xw5k"] Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.214019 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99456b7d-1910-4568-bd41-1530e3e72765" containerName="container-00" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.214043 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="99456b7d-1910-4568-bd41-1530e3e72765" containerName="container-00" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.214367 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="99456b7d-1910-4568-bd41-1530e3e72765" containerName="container-00" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.215247 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.275635 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.275705 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.275853 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.277036 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.338130 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.338273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.436206 4808 scope.go:117] "RemoveContainer" containerID="886212de31c048e2a4a7d6ec1f21ce8db66db2cb601a099787b0b26295d79e07" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.436244 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.439920 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.440004 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.440210 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.462318 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.539014 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.448794 4808 generic.go:334] "Generic (PLEG): container finished" podID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerID="cd15dbf76ff4b1429591d975b57babb4c210c92b9b9c36cf667e623e8c29cf61" exitCode=0 Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.449329 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" event={"ID":"fe1f8ccd-1720-43d2-b334-f9dde62e0972","Type":"ContainerDied","Data":"cd15dbf76ff4b1429591d975b57babb4c210c92b9b9c36cf667e623e8c29cf61"} Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.449361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" event={"ID":"fe1f8ccd-1720-43d2-b334-f9dde62e0972","Type":"ContainerStarted","Data":"fc9bd2d6092fb23dae133918bd69420618835ebb59416af2542dadb082ea10ee"} Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.493089 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-8xw5k"] Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.501790 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-8xw5k"] Feb 17 17:20:35 crc kubenswrapper[4808]: E0217 17:20:35.147420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.610103 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.703992 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.704260 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host" (OuterVolumeSpecName: "host") pod "fe1f8ccd-1720-43d2-b334-f9dde62e0972" (UID: "fe1f8ccd-1720-43d2-b334-f9dde62e0972"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.704298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.704886 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.716200 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf" (OuterVolumeSpecName: "kube-api-access-c88gf") pod "fe1f8ccd-1720-43d2-b334-f9dde62e0972" (UID: "fe1f8ccd-1720-43d2-b334-f9dde62e0972"). InnerVolumeSpecName "kube-api-access-c88gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.807420 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:36 crc kubenswrapper[4808]: I0217 17:20:36.502051 4808 scope.go:117] "RemoveContainer" containerID="cd15dbf76ff4b1429591d975b57babb4c210c92b9b9c36cf667e623e8c29cf61" Feb 17 17:20:36 crc kubenswrapper[4808]: I0217 17:20:36.502345 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:37 crc kubenswrapper[4808]: I0217 17:20:37.161939 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" path="/var/lib/kubelet/pods/fe1f8ccd-1720-43d2-b334-f9dde62e0972/volumes" Feb 17 17:20:40 crc kubenswrapper[4808]: I0217 17:20:40.146085 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:40 crc kubenswrapper[4808]: E0217 17:20:40.147067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:47 crc kubenswrapper[4808]: E0217 17:20:47.154335 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:48 crc kubenswrapper[4808]: E0217 17:20:48.146998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:52 crc 
Feb 17 17:20:52 crc kubenswrapper[4808]: I0217 17:20:52.146269 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:20:52 crc kubenswrapper[4808]: E0217 17:20:52.147824 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.275040 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.275722 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.275908 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.277191 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:21:02 crc kubenswrapper[4808]: E0217 17:21:02.148459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:21:04 crc kubenswrapper[4808]: I0217 17:21:04.146292 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:21:04 crc kubenswrapper[4808]: E0217 17:21:04.146934 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.511905 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"]
Feb 17 17:21:10 crc kubenswrapper[4808]: E0217 17:21:10.513031 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerName="container-00"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.513047 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerName="container-00"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.513287 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerName="container-00"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.514976 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.541327 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"]
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.604908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.605308 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.605386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708132 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708188 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv"
Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv"
\"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.878094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:11 crc kubenswrapper[4808]: I0217 17:21:11.161547 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:11 crc kubenswrapper[4808]: I0217 17:21:11.721816 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:11 crc kubenswrapper[4808]: I0217 17:21:11.951544 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerStarted","Data":"fba4b2968632d2bd4cdd0c26e698a48d92c3645d42d2a965a77a8846ddad4b21"} Feb 17 17:21:12 crc kubenswrapper[4808]: I0217 17:21:12.963169 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" exitCode=0 Feb 17 17:21:12 crc kubenswrapper[4808]: I0217 17:21:12.963227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076"} Feb 17 17:21:13 crc kubenswrapper[4808]: E0217 17:21:13.147153 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:15 crc kubenswrapper[4808]: I0217 17:21:15.008115 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerStarted","Data":"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f"} Feb 17 17:21:16 crc kubenswrapper[4808]: I0217 17:21:16.146646 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:16 crc kubenswrapper[4808]: E0217 17:21:16.147314 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:16 crc kubenswrapper[4808]: E0217 17:21:16.147812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.151802 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/init-config-reloader/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.706124 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/config-reloader/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.715387 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/alertmanager/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.879112 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/init-config-reloader/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.952430 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f445fb886-lsqq4_a9bf13d7-3430-4818-b8fc-239796570b6c/barbican-api/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.990167 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f445fb886-lsqq4_a9bf13d7-3430-4818-b8fc-239796570b6c/barbican-api-log/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.114406 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d78867d94-7lhqs_990b124d-3558-48ad-87f8-503580da5cc7/barbican-keystone-listener/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.226968 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d78867d94-7lhqs_990b124d-3558-48ad-87f8-503580da5cc7/barbican-keystone-listener-log/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.302298 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6d995c5-hnz4n_a0db6993-f3e7-4aa7-b5cc-1b848a15b56c/barbican-worker/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.357003 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6d995c5-hnz4n_a0db6993-f3e7-4aa7-b5cc-1b848a15b56c/barbican-worker-log/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.608337 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g_e4a30af7-342e-49c0-8e89-c38f11b7cc63/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.001642 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2876084b-7055-449d-9ddb-447d3a515d80/ceilometer-notification-agent/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.048905 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2876084b-7055-449d-9ddb-447d3a515d80/proxy-httpd/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.279416 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2876084b-7055-449d-9ddb-447d3a515d80/sg-core/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.316157 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b221adbf-8d08-4f9c-8bb2-578555a453df/cinder-api/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 
Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.381303 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b221adbf-8d08-4f9c-8bb2-578555a453df/cinder-api-log/0.log"
Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.610822 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fce98890-1299-4c07-8a3a-739241f0bf0d/cinder-scheduler/0.log"
Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.639719 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fce98890-1299-4c07-8a3a-739241f0bf0d/probe/0.log"
Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.901779 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b35dce7b-8ffe-4981-8376-5db5a01dcf77/cloudkitty-api-log/0.log"
Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.905435 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b35dce7b-8ffe-4981-8376-5db5a01dcf77/cloudkitty-api/0.log"
Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.174244 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_c850b5fe-4c28-4136-8136-fae52e38371b/loki-compactor/0.log"
Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.301145 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-zfhfg_4fa85572-1552-4a27-8974-b1e2d376167c/loki-distributor/0.log"
Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.465036 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-77rbq_c4fa7a6a-b7fc-464c-b529-dcf8d20de97e/gateway/0.log"
Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.568714 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-mdlhq_dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0/gateway/0.log"
Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.723052 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"]
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.737563 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_d6dbebd3-2b7c-4afa-8937-5c47b749e8b0/loki-index-gateway/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.770825 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.840255 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_c7929d5b-e791-419e-8039-50cc9f8202f2/loki-ingester/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.852902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.852958 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.853041 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.954894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955117 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955144 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955915 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc 
Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955946 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96"
Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.981681 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96"
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.051716 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-pkj8k_6df15762-0f06-48ff-89bf-00f5118c6ced/loki-querier/0.log"
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.067148 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96"
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.123400 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4_be29c259-d619-4326-b866-2a8560d9b818/loki-query-frontend/0.log"
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.458306 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-mqnbz_3d16d4be-1ab3-4261-97a7-054701cf9dba/init/0.log"
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.651886 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-mqnbz_3d16d4be-1ab3-4261-97a7-054701cf9dba/init/0.log"
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.738276 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"]
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.773353 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-mqnbz_3d16d4be-1ab3-4261-97a7-054701cf9dba/dnsmasq-dns/0.log"
Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.850709 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz_486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.127323 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8_c51156c6-7d2b-4871-9ae0-963c4eb67454/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:21:26 crc kubenswrapper[4808]: E0217 17:21:26.153634 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.192533 4808 generic.go:334] "Generic (PLEG): container finished" podID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" exitCode=0
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.192602 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea"}
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.192628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerStarted","Data":"2536e14a994b64f27af984baacbd8fd7c12099545e13e6a5747da97bd5cf5e03"}
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.452448 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl_8b75e2b3-ab6a-4088-897b-7a11da62a654/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.557955 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv_d178dfcd-66d8-40ba-b740-909fe6e081ac/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:21:26 crc kubenswrapper[4808]: E0217 17:21:26.605181 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d9a64bc_8829_4eb8_b992_92f15c06c5cd.slice/crio-conmon-486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.756934 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sjckt_2084629b-ffd4-4f5e-8db7-070d4a08dd8e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.868522 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w_11efc7ce-322d-4bfe-95ad-c84d779a80d8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.063077 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk_6fa90ca1-9ae4-4cce-a41f-640f2629ccfd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.153923 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:21:27 crc kubenswrapper[4808]: E0217 17:21:27.154269 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.168990 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d5dbe689-5e11-4832-84c8-d603c08a23e2/glance-httpd/0.log"
containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" exitCode=0 Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.205666 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f"} Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.316259 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d5dbe689-5e11-4832-84c8-d603c08a23e2/glance-log/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.358807 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b59528d2-0bad-4c66-9971-222dcaf72184/glance-httpd/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.488384 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b59528d2-0bad-4c66-9971-222dcaf72184/glance-log/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.732456 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-679dfcbbb9-npbsd_8a521aa0-4048-49a0-b6c1-32e07f349ac5/keystone-api/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.772551 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522461-f5wx2_d443f775-9b53-4aaf-bcda-68aed8d88e84/keystone-cron/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.031473 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_65ea994e-22f1-4dbf-8b79-8810148fad94/kube-state-metrics/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.216102 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerStarted","Data":"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94"} Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.218078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerStarted","Data":"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169"} Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.245373 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g2wvv" podStartSLOduration=3.500940941 podStartE2EDuration="18.245348783s" podCreationTimestamp="2026-02-17 17:21:10 +0000 UTC" firstStartedPulling="2026-02-17 17:21:12.9656657 +0000 UTC m=+5236.482024773" lastFinishedPulling="2026-02-17 17:21:27.710073542 +0000 UTC m=+5251.226432615" observedRunningTime="2026-02-17 17:21:28.239362872 +0000 UTC m=+5251.755721955" watchObservedRunningTime="2026-02-17 17:21:28.245348783 +0000 UTC m=+5251.761707856" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.357257 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6489dbc7-2ddnw_b7e54d61-1bf6-41ae-b885-7e6448d351a5/neutron-api/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.404082 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6489dbc7-2ddnw_b7e54d61-1bf6-41ae-b885-7e6448d351a5/neutron-httpd/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.944994 4808 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-api-0_e91a7ada-9f3c-4a6c-a56e-355538c9a868/nova-api-log/0.log" Feb 17 17:21:29 crc kubenswrapper[4808]: I0217 17:21:29.414566 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e91a7ada-9f3c-4a6c-a56e-355538c9a868/nova-api-api/0.log" Feb 17 17:21:29 crc kubenswrapper[4808]: I0217 17:21:29.637183 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fd596411-c54c-4a8a-9b6a-420b6ab3c9ff/nova-cell0-conductor-conductor/0.log" Feb 17 17:21:29 crc kubenswrapper[4808]: I0217 17:21:29.773935 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1c30e340-2218-46f6-97d6-aaf96a54d84d/nova-cell1-conductor-conductor/0.log" Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.122742 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e1acfe51-1173-4ce1-a645-d757d30e3312/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 17:21:30 crc kubenswrapper[4808]: E0217 17:21:30.151211 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.244195 4808 generic.go:334] "Generic (PLEG): container finished" podID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" exitCode=0 Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.244247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169"} Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.273035 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbdf54f1-8cfa-46c6-addd-bda126337c05/nova-metadata-log/0.log" Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.876091 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4481dde9-062b-48d4-ae35-b6fa96ccf94e/nova-scheduler-scheduler/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.165633 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.165708 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.354525 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ade81c90-5cdf-45d4-ad2f-52a3514e1596/mysql-bootstrap/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.401339 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ade81c90-5cdf-45d4-ad2f-52a3514e1596/mysql-bootstrap/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.597223 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ade81c90-5cdf-45d4-ad2f-52a3514e1596/galera/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.950985 4808 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a020d38c-5e24-4266-96dc-9050e4d82f46/mysql-bootstrap/0.log" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.216011 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g2wvv" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" probeResult="failure" output=< Feb 17 17:21:32 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:21:32 crc kubenswrapper[4808]: > Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.263996 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerStarted","Data":"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4"} Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.291811 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f8d96" podStartSLOduration=3.634446022 podStartE2EDuration="8.291792951s" podCreationTimestamp="2026-02-17 17:21:24 +0000 UTC" firstStartedPulling="2026-02-17 17:21:26.203973879 +0000 UTC m=+5249.720332952" lastFinishedPulling="2026-02-17 17:21:30.861320808 +0000 UTC m=+5254.377679881" observedRunningTime="2026-02-17 17:21:32.289004475 +0000 UTC m=+5255.805363548" watchObservedRunningTime="2026-02-17 17:21:32.291792951 +0000 UTC m=+5255.808152024" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.608431 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_14f49c04-388f-4eeb-be54-cbf3713606db/cloudkitty-proc/0.log" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.736413 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a020d38c-5e24-4266-96dc-9050e4d82f46/mysql-bootstrap/0.log" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.791790 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a020d38c-5e24-4266-96dc-9050e4d82f46/galera/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.249369 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5ce308e0-2ba0-41ae-8760-e749c8d04130/openstackclient/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.323351 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbdf54f1-8cfa-46c6-addd-bda126337c05/nova-metadata-metadata/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.379901 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qh29t_52d5a09f-33dd-49cf-9a31-a21d73a43b86/openstack-network-exporter/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.589542 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovsdb-server-init/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.694179 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovsdb-server-init/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.798208 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovs-vswitchd/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.869602 4808 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovsdb-server/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.025677 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pfcvm_8a76a2ff-ed1a-4279-898c-54e85973f024/ovn-controller/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.133889 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_79b7a04d-f324-40d0-ad2b-370cfef43858/openstack-network-exporter/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.245857 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_79b7a04d-f324-40d0-ad2b-370cfef43858/ovn-northd/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.410794 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8c434a76-4dcf-4c69-aefa-5cda8b120a26/openstack-network-exporter/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.464159 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8c434a76-4dcf-4c69-aefa-5cda8b120a26/ovsdbserver-nb/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.567148 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_220c5de1-b4bf-454c-b013-17d78d86cca3/openstack-network-exporter/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.649343 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_220c5de1-b4bf-454c-b013-17d78d86cca3/ovsdbserver-sb/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.913328 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b995d5cb-7xs25_ab7f0766-47a0-4616-b6dc-32957d59188a/placement-api/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.005001 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b995d5cb-7xs25_ab7f0766-47a0-4616-b6dc-32957d59188a/placement-log/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.067325 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.067378 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.117016 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/init-config-reloader/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.279093 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/init-config-reloader/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.348166 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/config-reloader/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.365855 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/prometheus/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.366951 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/thanos-sidecar/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.584532 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9da8d67e-00c6-4ba1-a08b-09c5653d93fd/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.119971 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-f8d96" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" probeResult="failure" output=< Feb 17 17:21:36 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:21:36 crc kubenswrapper[4808]: > Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.183778 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9da8d67e-00c6-4ba1-a08b-09c5653d93fd/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.192439 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_357e5513-bef7-45cc-b62f-072a161ccce3/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.301658 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9da8d67e-00c6-4ba1-a08b-09c5653d93fd/rabbitmq/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.653817 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_357e5513-bef7-45cc-b62f-072a161ccce3/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.717479 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_357e5513-bef7-45cc-b62f-072a161ccce3/rabbitmq/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.846723 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8pfvq_404291d9-a172-4a9a-8a0e-2f2514ce06ff/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.294341 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl_785a49f6-7a06-4787-a829-fc9956730c15/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.341192 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-dcfbdc547-54spv_45097e1f-e6c7-40c1-8338-3f1ac506c3fe/proxy-httpd/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.544346 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-dcfbdc547-54spv_45097e1f-e6c7-40c1-8338-3f1ac506c3fe/proxy-server/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.577613 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qg65w_eb2856a7-c37a-4ecc-a4a2-c49864240315/swift-ring-rebalance/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.967293 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-reaper/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.983851 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-auditor/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.066217 4808 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-replicator/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.103135 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-server/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.145537 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:38 crc kubenswrapper[4808]: E0217 17:21:38.145904 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.307878 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-server/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.331842 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-auditor/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.352124 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-replicator/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.431654 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-updater/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.518507 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-expirer/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.647645 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-auditor/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.681465 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-server/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.700173 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-replicator/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.844668 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-updater/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.869422 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/rsync/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.944653 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/swift-recon-cron/0.log" Feb 17 17:21:41 crc kubenswrapper[4808]: E0217 17:21:41.148014 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:42 crc kubenswrapper[4808]: I0217 17:21:42.290134 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g2wvv" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" probeResult="failure" output=< Feb 17 17:21:42 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:21:42 crc kubenswrapper[4808]: > Feb 17 17:21:43 crc kubenswrapper[4808]: E0217 17:21:43.149039 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:21:43 crc kubenswrapper[4808]: I0217 17:21:43.182975 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2ea38754-3b00-4bcb-93d9-28b60dda0e0a/memcached/0.log" Feb 17 17:21:45 crc kubenswrapper[4808]: I0217 17:21:45.122342 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:45 crc kubenswrapper[4808]: I0217 17:21:45.176022 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:45 crc kubenswrapper[4808]: I0217 17:21:45.359704 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:46 crc kubenswrapper[4808]: I0217 17:21:46.417305 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f8d96" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" containerID="cri-o://34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" gracePeriod=2 Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.161858 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.315133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"40119af6-a3e0-44d6-abc8-df39c96836ac\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.315198 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"40119af6-a3e0-44d6-abc8-df39c96836ac\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.315406 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"40119af6-a3e0-44d6-abc8-df39c96836ac\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.317962 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities" (OuterVolumeSpecName: "utilities") pod "40119af6-a3e0-44d6-abc8-df39c96836ac" (UID: "40119af6-a3e0-44d6-abc8-df39c96836ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.327541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp" (OuterVolumeSpecName: "kube-api-access-pcgzp") pod "40119af6-a3e0-44d6-abc8-df39c96836ac" (UID: "40119af6-a3e0-44d6-abc8-df39c96836ac"). InnerVolumeSpecName "kube-api-access-pcgzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.347489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40119af6-a3e0-44d6-abc8-df39c96836ac" (UID: "40119af6-a3e0-44d6-abc8-df39c96836ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.418291 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.418332 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.418344 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430685 4808 generic.go:334] "Generic (PLEG): container finished" podID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" exitCode=0 Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4"} Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430782 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"2536e14a994b64f27af984baacbd8fd7c12099545e13e6a5747da97bd5cf5e03"} Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430788 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430801 4808 scope.go:117] "RemoveContainer" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.463027 4808 scope.go:117] "RemoveContainer" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.481305 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.503542 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.508768 4808 scope.go:117] "RemoveContainer" containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.547811 4808 scope.go:117] "RemoveContainer" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" Feb 17 17:21:47 crc kubenswrapper[4808]: E0217 17:21:47.548441 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4\": container with ID starting with 34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4 not found: ID does not exist" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.548479 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4"} err="failed to get container status \"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4\": rpc error: code = NotFound desc = could not find container \"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4\": container with ID starting with 34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4 not found: ID does not exist" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.548505 4808 scope.go:117] "RemoveContainer" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" Feb 17 17:21:47 crc kubenswrapper[4808]: E0217 17:21:47.549137 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169\": container with ID starting with f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169 not found: ID does not exist" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.549163 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169"} err="failed to get container status \"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169\": rpc error: code = NotFound desc = could not find container \"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169\": container with ID starting with f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169 not found: ID does not exist" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.549182 4808 scope.go:117] "RemoveContainer" 
containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" Feb 17 17:21:47 crc kubenswrapper[4808]: E0217 17:21:47.549463 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea\": container with ID starting with eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea not found: ID does not exist" containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.549608 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea"} err="failed to get container status \"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea\": rpc error: code = NotFound desc = could not find container \"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea\": container with ID starting with eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea not found: ID does not exist" Feb 17 17:21:49 crc kubenswrapper[4808]: I0217 17:21:49.158931 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" path="/var/lib/kubelet/pods/40119af6-a3e0-44d6-abc8-df39c96836ac/volumes" Feb 17 17:21:51 crc kubenswrapper[4808]: I0217 17:21:51.146130 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:51 crc kubenswrapper[4808]: E0217 17:21:51.146786 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:51 crc kubenswrapper[4808]: I0217 17:21:51.215527 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:51 crc kubenswrapper[4808]: I0217 17:21:51.287284 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:51 crc kubenswrapper[4808]: I0217 17:21:51.449742 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:52 crc kubenswrapper[4808]: E0217 17:21:52.148535 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:52 crc kubenswrapper[4808]: I0217 17:21:52.482103 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g2wvv" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" containerID="cri-o://0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" gracePeriod=2 Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.045830 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.142054 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") pod \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.142153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.142291 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.143172 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities" (OuterVolumeSpecName: "utilities") pod "9d9a64bc-8829-4eb8-b992-92f15c06c5cd" (UID: "9d9a64bc-8829-4eb8-b992-92f15c06c5cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.151793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk" (OuterVolumeSpecName: "kube-api-access-dbnrk") pod "9d9a64bc-8829-4eb8-b992-92f15c06c5cd" (UID: "9d9a64bc-8829-4eb8-b992-92f15c06c5cd"). InnerVolumeSpecName "kube-api-access-dbnrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.245128 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.245155 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.299759 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d9a64bc-8829-4eb8-b992-92f15c06c5cd" (UID: "9d9a64bc-8829-4eb8-b992-92f15c06c5cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.347720 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495119 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" exitCode=0 Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495174 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94"} Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495186 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495213 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"fba4b2968632d2bd4cdd0c26e698a48d92c3645d42d2a965a77a8846ddad4b21"} Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495240 4808 scope.go:117] "RemoveContainer" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.517834 4808 scope.go:117] "RemoveContainer" containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.593642 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.597522 4808 scope.go:117] "RemoveContainer" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.624945 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.688872 4808 scope.go:117] "RemoveContainer" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" Feb 17 17:21:53 crc kubenswrapper[4808]: E0217 17:21:53.698091 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94\": container with ID starting with 0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94 not found: ID does not exist" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.698139 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94"} err="failed to get container status \"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94\": rpc error: code = NotFound desc = could not find container \"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94\": container with ID starting with 0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94 not found: ID does not exist" Feb 17 17:21:53 crc 
kubenswrapper[4808]: I0217 17:21:53.698172 4808 scope.go:117] "RemoveContainer" containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" Feb 17 17:21:53 crc kubenswrapper[4808]: E0217 17:21:53.701983 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f\": container with ID starting with 486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f not found: ID does not exist" containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.702030 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f"} err="failed to get container status \"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f\": rpc error: code = NotFound desc = could not find container \"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f\": container with ID starting with 486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f not found: ID does not exist" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.702057 4808 scope.go:117] "RemoveContainer" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" Feb 17 17:21:53 crc kubenswrapper[4808]: E0217 17:21:53.705905 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076\": container with ID starting with bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076 not found: ID does not exist" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.705947 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076"} err="failed to get container status \"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076\": rpc error: code = NotFound desc = could not find container \"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076\": container with ID starting with bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076 not found: ID does not exist" Feb 17 17:21:55 crc kubenswrapper[4808]: I0217 17:21:55.157392 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" path="/var/lib/kubelet/pods/9d9a64bc-8829-4eb8-b992-92f15c06c5cd/volumes" Feb 17 17:21:58 crc kubenswrapper[4808]: E0217 17:21:58.147569 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:03 crc kubenswrapper[4808]: I0217 17:22:03.145890 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:03 crc kubenswrapper[4808]: E0217 17:22:03.146747 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:03 crc kubenswrapper[4808]: E0217 17:22:03.150239 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:10 crc kubenswrapper[4808]: E0217 17:22:10.148731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:10 crc kubenswrapper[4808]: I0217 17:22:10.723821 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/util/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.390334 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/util/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.444179 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/pull/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.454110 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/pull/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.674941 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/pull/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.733286 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/util/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.763708 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/extract/0.log" Feb 17 17:22:12 crc kubenswrapper[4808]: I0217 17:22:12.126750 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-gl97b_e2e1b5f4-7ed2-4ab1-871b-1974a7559252/manager/0.log" Feb 17 17:22:12 crc kubenswrapper[4808]: I0217 17:22:12.467066 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-b7hkk_b622bb16-c5b4-45ea-b493-e681d36d49ac/manager/0.log" Feb 17 17:22:12 crc kubenswrapper[4808]: I0217 17:22:12.687519 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-xv924_d4bd0818-617e-418a-b7c7-f70ba7ebc3d8/manager/0.log" Feb 17 17:22:13 crc kubenswrapper[4808]: I0217 17:22:13.766061 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-plpr2_681f334b-d0ac-43dc-babb-92d9cb7c0440/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.145512 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:14 crc kubenswrapper[4808]: E0217 17:22:14.146070 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.408392 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-n6qxn_6508a74d-2dba-4d1b-910c-95c9463c15a4/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.418116 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-thpj7_ace1fd54-7ff8-45b9-a77b-c3908044365e/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.505479 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-4cv77_77df5d1f-daff-4508-861a-335ab87f2366/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.797648 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-8xfc6_96baec58-63b9-49cd-9cf4-32639e58d4ac/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.822621 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-tkhr5_93278ccd-52fe-4848-9a46-3f47369d47ab/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.111280 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-vgbmj_a40e52a1-9867-413a-81fb-324789e0a009/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.214693 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-kg6xx_8d4c91a6-8441-45a6-bb6a-7655ba464fb9/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.433871 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-t9k25_a6f8ca14-e1db-4dcc-a64d-7bf137105e80/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.577700 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws_2ec18a16-766f-4a0c-a393-0ca7a999011e/manager/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.103390 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-64549bfd8b-rwgq9_2db6cd8b-961f-442e-8bd4-ced98807709a/operator/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.330726 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-75t5f_aa72ff82-f411-42f6-8144-937ca196211b/registry-server/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.596659 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-slw7s_6764d3f3-5e9f-4635-973e-81324dbc8e34/manager/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.835491 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-5mm2j_0a170b4f-607d-4c7c-bd0c-ee6c29523b44/manager/0.log" Feb 17 17:22:17 crc kubenswrapper[4808]: I0217 17:22:17.094835 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xcs6n_a83d92da-4f15-4e33-ab57-ae7bc9e0da5e/operator/0.log" Feb 17 17:22:17 crc kubenswrapper[4808]: E0217 17:22:17.158509 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:17 crc kubenswrapper[4808]: I0217 17:22:17.337705 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-z4vp8_74dda28c-8860-440c-b97c-b16bab985ff0/manager/0.log" Feb 17 17:22:17 crc kubenswrapper[4808]: I0217 17:22:17.778837 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-zxqhb_b42c0b9b-cca5-4ecb-908e-508fbf932dfe/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.200285 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-546d579865-b8s4r_5e47b192-26de-4639-afe8-ec7b5fcc10c8/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.448072 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-5qkk2_cde66c49-b3c4-4f4f-b614-c4343d1c3732/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.634616 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-66fcc5ff49-dnzp5_bdd19f1d-df45-4dda-a2bd-b14da398e043/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.703376 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-xp9sf_a2547c9d-80d6-491d-8517-26327e35a1f4/manager/0.log" Feb 17 17:22:21 crc kubenswrapper[4808]: E0217 17:22:21.147519 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:24 crc kubenswrapper[4808]: I0217 17:22:24.606453 4808 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-cjh7p_3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb/manager/0.log" Feb 17 17:22:28 crc kubenswrapper[4808]: E0217 17:22:28.148657 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:29 crc kubenswrapper[4808]: I0217 17:22:29.146376 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:29 crc kubenswrapper[4808]: E0217 17:22:29.146891 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:34 crc kubenswrapper[4808]: E0217 17:22:34.148103 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.145717 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:41 crc kubenswrapper[4808]: E0217 17:22:41.146699 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.232027 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t8ws2_94f0bc0d-40c0-45b7-b6c4-7b285ba26c52/control-plane-machine-set-operator/0.log" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.412078 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-srhjb_656b06bf-9660-4c18-941b-5e5589f0301a/kube-rbac-proxy/0.log" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.464547 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-srhjb_656b06bf-9660-4c18-941b-5e5589f0301a/machine-api-operator/0.log" Feb 17 17:22:42 crc kubenswrapper[4808]: E0217 17:22:42.148470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:45 crc 
kubenswrapper[4808]: E0217 17:22:45.147374 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:54 crc kubenswrapper[4808]: I0217 17:22:54.506882 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2mptt_e17861f0-9138-4fa1-8fa0-7bd761f1e1bd/cert-manager-controller/0.log" Feb 17 17:22:54 crc kubenswrapper[4808]: I0217 17:22:54.665278 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cjbd9_f70c72b0-4029-491f-b93e-4b4e52c5bf77/cert-manager-cainjector/0.log" Feb 17 17:22:54 crc kubenswrapper[4808]: I0217 17:22:54.731268 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dgw65_5bcb3c4d-b451-49ff-87b7-7b95830c0628/cert-manager-webhook/0.log" Feb 17 17:22:55 crc kubenswrapper[4808]: I0217 17:22:55.146273 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:55 crc kubenswrapper[4808]: E0217 17:22:55.146563 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:57 crc kubenswrapper[4808]: E0217 17:22:57.157363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:57 crc kubenswrapper[4808]: E0217 17:22:57.157917 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:06 crc kubenswrapper[4808]: I0217 17:23:06.146575 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:06 crc kubenswrapper[4808]: E0217 17:23:06.147313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:07 crc kubenswrapper[4808]: I0217 17:23:07.647345 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-48n66_2c731526-11bd-4ef9-bb62-eb3a0512ff1d/nmstate-console-plugin/0.log" Feb 17 17:23:07 
crc kubenswrapper[4808]: I0217 17:23:07.862798 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q5xs9_16498191-a001-4403-af35-b76104720e91/nmstate-handler/0.log" Feb 17 17:23:07 crc kubenswrapper[4808]: I0217 17:23:07.913024 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-j8rw5_56fb3ff0-71b6-4792-acdf-33edb0cb23b4/kube-rbac-proxy/0.log" Feb 17 17:23:07 crc kubenswrapper[4808]: I0217 17:23:07.955755 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-j8rw5_56fb3ff0-71b6-4792-acdf-33edb0cb23b4/nmstate-metrics/0.log" Feb 17 17:23:08 crc kubenswrapper[4808]: I0217 17:23:08.090070 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-bjzdq_691d742f-d55e-48e4-89bc-7936f6b31f12/nmstate-operator/0.log" Feb 17 17:23:08 crc kubenswrapper[4808]: I0217 17:23:08.154636 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-vz75q_9f2e1846-9112-48fb-b69e-0a12393c62e6/nmstate-webhook/0.log" Feb 17 17:23:11 crc kubenswrapper[4808]: E0217 17:23:11.149116 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:12 crc kubenswrapper[4808]: E0217 17:23:12.148255 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:18 crc kubenswrapper[4808]: I0217 17:23:18.145826 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:18 crc kubenswrapper[4808]: E0217 17:23:18.146560 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:21 crc kubenswrapper[4808]: I0217 17:23:21.830301 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/manager/0.log" Feb 17 17:23:21 crc kubenswrapper[4808]: I0217 17:23:21.894161 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/kube-rbac-proxy/0.log" Feb 17 17:23:22 crc kubenswrapper[4808]: E0217 17:23:22.147867 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:23 crc kubenswrapper[4808]: E0217 17:23:23.149018 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:30 crc kubenswrapper[4808]: I0217 17:23:30.146261 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:30 crc kubenswrapper[4808]: E0217 17:23:30.147470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:34 crc kubenswrapper[4808]: I0217 17:23:34.786015 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lshnf_038219cb-02e4-4451-b0d4-3e6af1518769/prometheus-operator/0.log" Feb 17 17:23:34 crc kubenswrapper[4808]: I0217 17:23:34.944879 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_2b8a3138-8c3d-434b-9069-8cafc18a0111/prometheus-operator-admission-webhook/0.log" Feb 17 17:23:35 crc kubenswrapper[4808]: I0217 17:23:35.009703 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_6d2656af-cd69-49ff-8d35-7c81fa4c4693/prometheus-operator-admission-webhook/0.log" Feb 17 17:23:35 crc kubenswrapper[4808]: I0217 17:23:35.169548 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7nl9q_c7703980-a631-414f-b3fc-a76dfdd1e085/operator/0.log" Feb 17 17:23:35 crc kubenswrapper[4808]: I0217 17:23:35.208626 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pkvl8_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab/perses-operator/0.log" Feb 17 17:23:37 crc kubenswrapper[4808]: E0217 17:23:37.153954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:37 crc kubenswrapper[4808]: E0217 17:23:37.153993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:45 crc kubenswrapper[4808]: I0217 17:23:45.145869 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:45 crc kubenswrapper[4808]: E0217 17:23:45.146658 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:49 crc kubenswrapper[4808]: E0217 17:23:49.148178 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.454547 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jvlrt_86420ee7-2594-4ef8-8b9d-05a073118389/kube-rbac-proxy/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.701417 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.703008 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jvlrt_86420ee7-2594-4ef8-8b9d-05a073118389/controller/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.956271 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.970410 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.983129 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.018854 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: E0217 17:23:52.148849 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.192018 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.225591 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.246100 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.266159 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.455167 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.468882 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.498656 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/controller/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.546664 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.659058 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/frr-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.718846 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/kube-rbac-proxy/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.773313 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/kube-rbac-proxy-frr/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.965690 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/reloader/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.080455 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-zvr84_b55883d0-d8e0-4609-8b1a-033d6808ab56/frr-k8s-webhook-server/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.314941 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6655d59788-74j79_d90f3d87-35f4-4c7d-b157-424ee7b502cd/manager/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.500874 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f74458966-dhjp5_6de38240-7d75-47a0-b5c1-788f619bb8ff/webhook-server/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.583900 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2hrgh_c8e5bfe8-d4de-4863-b830-db146a4f0bd8/kube-rbac-proxy/0.log" Feb 17 17:23:54 crc kubenswrapper[4808]: I0217 17:23:54.084333 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/frr/0.log" Feb 17 17:23:54 crc kubenswrapper[4808]: I0217 17:23:54.299041 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2hrgh_c8e5bfe8-d4de-4863-b830-db146a4f0bd8/speaker/0.log" Feb 17 17:23:58 crc kubenswrapper[4808]: I0217 17:23:58.146021 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:58 crc kubenswrapper[4808]: I0217 17:23:58.728812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f"} Feb 17 17:24:00 crc kubenswrapper[4808]: E0217 17:24:00.148698 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:07 crc kubenswrapper[4808]: E0217 17:24:07.158217 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.020696 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.447738 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/pull/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.450769 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/pull/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.495336 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.595773 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/extract/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.617534 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.643049 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/pull/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.828736 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.996546 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.029168 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/util/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.054231 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.236015 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.248780 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/util/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.260906 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/extract/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.411707 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/util/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.615798 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/util/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.634697 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.683319 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/pull/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: E0217 17:24:11.147775 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.349298 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/util/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.380501 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/extract/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.416820 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/pull/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.546135 4808 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-utilities/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.728541 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-content/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.745561 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-utilities/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.789421 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-content/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.927454 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-utilities/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.972963 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.193740 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-utilities/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.394328 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-utilities/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.431005 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.496346 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.698802 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/registry-server/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.724442 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.812164 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-utilities/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.976348 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/util/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.279878 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/util/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.317005 
4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/pull/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.342625 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/pull/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.437537 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/registry-server/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.538858 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/pull/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.539417 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/extract/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.557357 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/util/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.664030 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-v2wfq_012287fd-dda3-4c7b-af1f-576ec2dc479b/marketplace-operator/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.713420 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-utilities/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.891403 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-content/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.895976 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-utilities/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.918591 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.086069 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-utilities/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.095737 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.102190 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-utilities/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.261171 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/registry-server/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.399415 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.409078 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.415977 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-utilities/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.576637 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.604438 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-utilities/0.log" Feb 17 17:24:15 crc kubenswrapper[4808]: I0217 17:24:15.252270 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/registry-server/0.log" Feb 17 17:24:20 crc kubenswrapper[4808]: E0217 17:24:20.148829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:24 crc kubenswrapper[4808]: E0217 17:24:24.147221 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.045354 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lshnf_038219cb-02e4-4451-b0d4-3e6af1518769/prometheus-operator/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.054693 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_6d2656af-cd69-49ff-8d35-7c81fa4c4693/prometheus-operator-admission-webhook/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.054710 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_2b8a3138-8c3d-434b-9069-8cafc18a0111/prometheus-operator-admission-webhook/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.224977 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7nl9q_c7703980-a631-414f-b3fc-a76dfdd1e085/operator/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.265451 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pkvl8_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab/perses-operator/0.log" Feb 17 17:24:34 crc kubenswrapper[4808]: E0217 17:24:34.148351 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:35 crc kubenswrapper[4808]: E0217 17:24:35.147362 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.409553 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/manager/0.log" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.414275 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/kube-rbac-proxy/0.log" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.857374 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858203 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858269 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858326 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858376 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858428 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858476 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858537 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858616 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858678 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858727 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858782 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858831 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.859057 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.859127 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.860531 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.873462 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.011648 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.011704 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.012057 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114030 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") 
pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114693 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114711 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.147547 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.195621 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.798626 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:24:48 crc kubenswrapper[4808]: E0217 17:24:48.148318 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:48 crc kubenswrapper[4808]: I0217 17:24:48.209755 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerStarted","Data":"8a25a6931025d6f6be5fcb2fccd2fda1166a482876723231d8e539131a85c6ff"} Feb 17 17:24:49 crc kubenswrapper[4808]: E0217 17:24:49.148323 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:49 crc kubenswrapper[4808]: I0217 17:24:49.219392 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" exitCode=0 Feb 17 17:24:49 crc kubenswrapper[4808]: I0217 17:24:49.219432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d"} Feb 17 17:24:50 crc kubenswrapper[4808]: I0217 17:24:50.245014 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerStarted","Data":"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd"} Feb 17 17:24:53 crc kubenswrapper[4808]: I0217 17:24:53.295167 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" exitCode=0 Feb 17 17:24:53 crc kubenswrapper[4808]: I0217 17:24:53.295265 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd"} Feb 17 17:24:54 crc kubenswrapper[4808]: I0217 17:24:54.320140 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerStarted","Data":"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede"} Feb 17 17:24:54 crc kubenswrapper[4808]: I0217 17:24:54.366223 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wlh7l" podStartSLOduration=3.857899531 podStartE2EDuration="8.366206873s" podCreationTimestamp="2026-02-17 17:24:46 +0000 UTC" firstStartedPulling="2026-02-17 17:24:49.221373891 +0000 UTC m=+5452.737732964" lastFinishedPulling="2026-02-17 17:24:53.729681223 +0000 UTC m=+5457.246040306" observedRunningTime="2026-02-17 17:24:54.359646655 +0000 UTC m=+5457.876005728" watchObservedRunningTime="2026-02-17 17:24:54.366206873 +0000 UTC m=+5457.882565946" Feb 17 17:24:57 crc kubenswrapper[4808]: I0217 17:24:57.196487 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:57 crc kubenswrapper[4808]: I0217 17:24:57.196928 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:58 crc kubenswrapper[4808]: I0217 17:24:58.241839 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wlh7l" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:58 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:58 crc kubenswrapper[4808]: > Feb 17 17:24:59 crc kubenswrapper[4808]: E0217 17:24:59.146626 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:00 crc kubenswrapper[4808]: E0217 17:25:00.147352 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:08 crc kubenswrapper[4808]: I0217 17:25:08.250170 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wlh7l" 
podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" probeResult="failure" output=< Feb 17 17:25:08 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:25:08 crc kubenswrapper[4808]: > Feb 17 17:25:12 crc kubenswrapper[4808]: E0217 17:25:12.147830 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:13 crc kubenswrapper[4808]: E0217 17:25:13.166512 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:17 crc kubenswrapper[4808]: I0217 17:25:17.327610 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:17 crc kubenswrapper[4808]: I0217 17:25:17.406138 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:17 crc kubenswrapper[4808]: I0217 17:25:17.564972 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:25:18 crc kubenswrapper[4808]: I0217 17:25:18.561251 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wlh7l" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" containerID="cri-o://466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" gracePeriod=2 Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.171668 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.309321 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"c6abeea5-59f7-4b89-a47c-bee82aac4741\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.309393 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"c6abeea5-59f7-4b89-a47c-bee82aac4741\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.309552 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") pod \"c6abeea5-59f7-4b89-a47c-bee82aac4741\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.310781 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities" (OuterVolumeSpecName: "utilities") pod "c6abeea5-59f7-4b89-a47c-bee82aac4741" (UID: "c6abeea5-59f7-4b89-a47c-bee82aac4741"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.336767 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b" (OuterVolumeSpecName: "kube-api-access-9qb2b") pod "c6abeea5-59f7-4b89-a47c-bee82aac4741" (UID: "c6abeea5-59f7-4b89-a47c-bee82aac4741"). InnerVolumeSpecName "kube-api-access-9qb2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.393211 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6abeea5-59f7-4b89-a47c-bee82aac4741" (UID: "c6abeea5-59f7-4b89-a47c-bee82aac4741"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.413807 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.413858 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.413879 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.577890 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" exitCode=0 Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.577954 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede"} Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.578011 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.578036 4808 scope.go:117] "RemoveContainer" containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.578018 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"8a25a6931025d6f6be5fcb2fccd2fda1166a482876723231d8e539131a85c6ff"} Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.644805 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.646415 4808 scope.go:117] "RemoveContainer" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.662125 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.684735 4808 scope.go:117] "RemoveContainer" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.737170 4808 scope.go:117] "RemoveContainer" containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" Feb 17 17:25:19 crc kubenswrapper[4808]: E0217 17:25:19.742200 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede\": container with ID starting with 466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede not found: ID does not exist" containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742256 
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede"} err="failed to get container status \"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede\": rpc error: code = NotFound desc = could not find container \"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede\": container with ID starting with 466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede not found: ID does not exist" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742291 4808 scope.go:117] "RemoveContainer" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" Feb 17 17:25:19 crc kubenswrapper[4808]: E0217 17:25:19.742854 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd\": container with ID starting with 7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd not found: ID does not exist" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742902 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd"} err="failed to get container status \"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd\": rpc error: code = NotFound desc = could not find container \"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd\": container with ID starting with 7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd not found: ID does not exist" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742933 4808 scope.go:117] "RemoveContainer" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" Feb 17 17:25:19 crc kubenswrapper[4808]: E0217 17:25:19.743416 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d\": container with ID starting with 2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d not found: ID does not exist" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.743458 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d"} err="failed to get container status \"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d\": rpc error: code = NotFound desc = could not find container \"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d\": container with ID starting with 2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d not found: ID does not exist" Feb 17 17:25:21 crc kubenswrapper[4808]: I0217 17:25:21.163711 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" path="/var/lib/kubelet/pods/c6abeea5-59f7-4b89-a47c-bee82aac4741/volumes" Feb 17 17:25:24 crc kubenswrapper[4808]: E0217 17:25:24.150852 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:28 crc kubenswrapper[4808]: E0217 17:25:28.148678 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:39 crc kubenswrapper[4808]: I0217 17:25:39.149275 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.286283 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.286668 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.286835 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.288144 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:43 crc kubenswrapper[4808]: E0217 17:25:43.148267 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:51 crc kubenswrapper[4808]: E0217 17:25:51.151286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:56 crc kubenswrapper[4808]: E0217 17:25:56.149260 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:06 crc kubenswrapper[4808]: E0217 17:26:06.147930 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.283629 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.284128 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.284244 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.285456 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:19 crc kubenswrapper[4808]: I0217 17:26:19.382329 4808 scope.go:117] "RemoveContainer" containerID="ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b" Feb 17 17:26:19 crc kubenswrapper[4808]: I0217 17:26:19.417684 4808 scope.go:117] "RemoveContainer" containerID="9102d6dcaf6e3fbf8c87936c002d9f93bfb04d65b7f6656f4e84306710e44084" Feb 17 17:26:19 crc kubenswrapper[4808]: I0217 17:26:19.448181 4808 scope.go:117] "RemoveContainer" containerID="7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b" Feb 17 17:26:20 crc kubenswrapper[4808]: E0217 17:26:20.147114 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:21 crc kubenswrapper[4808]: I0217 17:26:21.591882 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:21 crc kubenswrapper[4808]: I0217 17:26:21.592300 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:22 crc kubenswrapper[4808]: E0217 17:26:22.150812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:26 crc kubenswrapper[4808]: I0217 17:26:26.629362 4808 generic.go:334] "Generic (PLEG): container finished" podID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerID="c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2" exitCode=0 Feb 17 17:26:26 crc kubenswrapper[4808]: I0217 17:26:26.629446 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerDied","Data":"c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2"} Feb 17 17:26:26 crc kubenswrapper[4808]: I0217 17:26:26.631508 4808 scope.go:117] "RemoveContainer" containerID="c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2" Feb 17 17:26:27 crc kubenswrapper[4808]: I0217 17:26:27.179251 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/gather/0.log" Feb 17 17:26:35 crc kubenswrapper[4808]: E0217 17:26:35.155209 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:35 crc kubenswrapper[4808]: E0217 17:26:35.155701 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.460711 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"] Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.460998 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v84wc/must-gather-25mrk" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" containerID="cri-o://271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306" gracePeriod=2 Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.473667 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"] Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.752448 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/copy/0.log" Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.753426 4808 generic.go:334] "Generic (PLEG): container finished" podID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerID="271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306" exitCode=143 Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.956641 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/copy/0.log" Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.957192 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.147564 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"6431aef1-ada4-4683-967f-18a8a901d3f7\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.147941 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"6431aef1-ada4-4683-967f-18a8a901d3f7\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.161259 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd" (OuterVolumeSpecName: "kube-api-access-l4xpd") pod "6431aef1-ada4-4683-967f-18a8a901d3f7" (UID: "6431aef1-ada4-4683-967f-18a8a901d3f7"). InnerVolumeSpecName "kube-api-access-l4xpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.251757 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") on node \"crc\" DevicePath \"\"" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.356011 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6431aef1-ada4-4683-967f-18a8a901d3f7" (UID: "6431aef1-ada4-4683-967f-18a8a901d3f7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.456390 4808 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.768006 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/copy/0.log" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.769000 4808 scope.go:117] "RemoveContainer" containerID="271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.769225 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.827806 4808 scope.go:117] "RemoveContainer" containerID="c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2" Feb 17 17:26:37 crc kubenswrapper[4808]: I0217 17:26:37.157529 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" path="/var/lib/kubelet/pods/6431aef1-ada4-4683-967f-18a8a901d3f7/volumes" Feb 17 17:26:47 crc kubenswrapper[4808]: E0217 17:26:47.197753 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:50 crc kubenswrapper[4808]: E0217 17:26:50.148867 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:51 crc kubenswrapper[4808]: I0217 17:26:51.592243 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:51 crc kubenswrapper[4808]: I0217 17:26:51.592657 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:27:01 crc kubenswrapper[4808]: E0217 17:27:01.148323 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:27:05 crc kubenswrapper[4808]: E0217 17:27:05.148523 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:27:12 crc kubenswrapper[4808]: E0217 17:27:12.171142 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:27:17 crc kubenswrapper[4808]: E0217 17:27:17.161471 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.592433 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.593021 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.593067 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.593899 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.593960 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f" gracePeriod=600 Feb 17 
Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.254845 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f" exitCode=0
Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.254970 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f"}
Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.255424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"}
Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.255448 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:27:23 crc kubenswrapper[4808]: E0217 17:27:23.148138 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:27:28 crc kubenswrapper[4808]: E0217 17:27:28.149306 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:27:35 crc kubenswrapper[4808]: E0217 17:27:35.149028 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:27:40 crc kubenswrapper[4808]: E0217 17:27:40.149665 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:27:48 crc kubenswrapper[4808]: E0217 17:27:48.148299 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:27:53 crc kubenswrapper[4808]: E0217 17:27:53.149846 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:28:01 crc kubenswrapper[4808]: E0217 17:28:01.149865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:28:05 crc kubenswrapper[4808]: E0217 17:28:05.150623 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:28:13 crc kubenswrapper[4808]: E0217 17:28:13.150924 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:28:16 crc kubenswrapper[4808]: E0217 17:28:16.146629 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:28:25 crc kubenswrapper[4808]: E0217 17:28:25.148004 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:28:28 crc kubenswrapper[4808]: E0217 17:28:28.154221 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:28:38 crc kubenswrapper[4808]: E0217 17:28:38.148353 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:28:42 crc kubenswrapper[4808]: E0217 17:28:42.149173 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:28:56 crc kubenswrapper[4808]: E0217 17:28:56.149330 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:07 crc kubenswrapper[4808]: E0217 17:29:07.152343 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:07 crc kubenswrapper[4808]: E0217 17:29:07.152356 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:20 crc kubenswrapper[4808]: E0217 17:29:20.149037 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:21 crc kubenswrapper[4808]: I0217 17:29:21.592718 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:29:21 crc kubenswrapper[4808]: I0217 17:29:21.593075 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:29:22 crc kubenswrapper[4808]: E0217 17:29:22.147923 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:32 crc kubenswrapper[4808]: E0217 17:29:32.148755 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:36 crc kubenswrapper[4808]: E0217 17:29:36.148530 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:44 crc kubenswrapper[4808]: E0217 17:29:44.150559 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:50 crc kubenswrapper[4808]: E0217 17:29:50.148203 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:51 crc kubenswrapper[4808]: I0217 17:29:51.591954 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:29:51 crc kubenswrapper[4808]: I0217 17:29:51.592044 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:29:57 crc kubenswrapper[4808]: E0217 17:29:57.159575 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.164931 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4"] Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165832 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165851 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165877 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165917 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="gather" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165927 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="gather" Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165939 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165947 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165965 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165972 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.166234 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="gather" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.166248 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.166261 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.167060 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.170417 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.171465 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.180114 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4"] Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.250012 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.250287 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.250466 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.354671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.354826 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.355280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.355991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.365296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.372483 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.495208 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.957675 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4"] Feb 17 17:30:01 crc kubenswrapper[4808]: I0217 17:30:01.058810 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" event={"ID":"ea831acb-24b6-4b34-9f26-5deb1d134bba","Type":"ContainerStarted","Data":"1f5029ea81d35ef8da22634b533b22242da37444b392ffdc0447ae81517dc0fb"} Feb 17 17:30:02 crc kubenswrapper[4808]: I0217 17:30:02.072442 4808 generic.go:334] "Generic (PLEG): container finished" podID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerID="5c8cb2f0ac8654a5c60f57179a47aa3c9838af7e2b7c0c647a02c3ef5c293184" exitCode=0 Feb 17 17:30:02 crc kubenswrapper[4808]: I0217 17:30:02.072548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" event={"ID":"ea831acb-24b6-4b34-9f26-5deb1d134bba","Type":"ContainerDied","Data":"5c8cb2f0ac8654a5c60f57179a47aa3c9838af7e2b7c0c647a02c3ef5c293184"} Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.585274 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.746759 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"ea831acb-24b6-4b34-9f26-5deb1d134bba\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.746998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"ea831acb-24b6-4b34-9f26-5deb1d134bba\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.747173 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"ea831acb-24b6-4b34-9f26-5deb1d134bba\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.747960 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea831acb-24b6-4b34-9f26-5deb1d134bba" (UID: "ea831acb-24b6-4b34-9f26-5deb1d134bba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.752982 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k" (OuterVolumeSpecName: "kube-api-access-ggd6k") pod "ea831acb-24b6-4b34-9f26-5deb1d134bba" (UID: "ea831acb-24b6-4b34-9f26-5deb1d134bba"). InnerVolumeSpecName "kube-api-access-ggd6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.756812 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea831acb-24b6-4b34-9f26-5deb1d134bba" (UID: "ea831acb-24b6-4b34-9f26-5deb1d134bba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.849832 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.849874 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.849885 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.093153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" event={"ID":"ea831acb-24b6-4b34-9f26-5deb1d134bba","Type":"ContainerDied","Data":"1f5029ea81d35ef8da22634b533b22242da37444b392ffdc0447ae81517dc0fb"} Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.093513 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f5029ea81d35ef8da22634b533b22242da37444b392ffdc0447ae81517dc0fb" Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.093276 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.673455 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.685650 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 17:30:05 crc kubenswrapper[4808]: E0217 17:30:05.148471 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:05 crc kubenswrapper[4808]: I0217 17:30:05.165997 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" path="/var/lib/kubelet/pods/450a44d1-3fb2-41f5-9200-59c6c1838c86/volumes" Feb 17 17:30:12 crc kubenswrapper[4808]: E0217 17:30:12.149015 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:17 crc kubenswrapper[4808]: E0217 17:30:17.156450 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:19 crc kubenswrapper[4808]: I0217 17:30:19.650678 4808 scope.go:117] "RemoveContainer" containerID="51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d" Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.593090 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.593695 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.593756 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.594683 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:30:21 
crc kubenswrapper[4808]: I0217 17:30:21.594742 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" gracePeriod=600 Feb 17 17:30:21 crc kubenswrapper[4808]: E0217 17:30:21.732328 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.302997 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" exitCode=0 Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.303056 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"} Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.303107 4808 scope.go:117] "RemoveContainer" containerID="6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f" Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.304253 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:30:22 crc kubenswrapper[4808]: E0217 17:30:22.304859 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:30:24 crc kubenswrapper[4808]: E0217 17:30:24.148468 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.600628 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:31 crc kubenswrapper[4808]: E0217 17:30:31.605534 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerName="collect-profiles" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.605559 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerName="collect-profiles" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.606366 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerName="collect-profiles" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.614977 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"]
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.615120 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.684677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.684884 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.684963 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.787460 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.788288 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.788454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.788996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.789232 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.823489 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.944277 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:32 crc kubenswrapper[4808]: E0217 17:30:32.180181 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:30:32 crc kubenswrapper[4808]: I0217 17:30:32.539150 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"]
Feb 17 17:30:33 crc kubenswrapper[4808]: I0217 17:30:33.449889 4808 generic.go:334] "Generic (PLEG): container finished" podID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" exitCode=0
Feb 17 17:30:33 crc kubenswrapper[4808]: I0217 17:30:33.449944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5"}
Feb 17 17:30:33 crc kubenswrapper[4808]: I0217 17:30:33.450484 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerStarted","Data":"d02687da12e1bb2927925182c84d9031a7ea83d264434f7638701e7bfa4e0094"}
Feb 17 17:30:34 crc kubenswrapper[4808]: I0217 17:30:34.463960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerStarted","Data":"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3"}
Feb 17 17:30:36 crc kubenswrapper[4808]: I0217 17:30:36.145845 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"
Feb 17 17:30:36 crc kubenswrapper[4808]: E0217 17:30:36.146443 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:30:37 crc kubenswrapper[4808]: I0217 17:30:37.503543 4808 generic.go:334] "Generic (PLEG): container finished" podID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" exitCode=0
event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3"} Feb 17 17:30:38 crc kubenswrapper[4808]: I0217 17:30:38.517277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerStarted","Data":"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56"} Feb 17 17:30:38 crc kubenswrapper[4808]: I0217 17:30:38.541970 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hf4ww" podStartSLOduration=3.07937231 podStartE2EDuration="7.541944944s" podCreationTimestamp="2026-02-17 17:30:31 +0000 UTC" firstStartedPulling="2026-02-17 17:30:33.453084677 +0000 UTC m=+5796.969443750" lastFinishedPulling="2026-02-17 17:30:37.915657271 +0000 UTC m=+5801.432016384" observedRunningTime="2026-02-17 17:30:38.535747836 +0000 UTC m=+5802.052106909" watchObservedRunningTime="2026-02-17 17:30:38.541944944 +0000 UTC m=+5802.058304057" Feb 17 17:30:39 crc kubenswrapper[4808]: E0217 17:30:39.148494 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:41 crc kubenswrapper[4808]: I0217 17:30:41.944543 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:41 crc kubenswrapper[4808]: I0217 17:30:41.945185 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:42 crc kubenswrapper[4808]: I0217 17:30:42.019866 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:43 crc kubenswrapper[4808]: I0217 17:30:43.150221 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.281450 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.282164 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.282352 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.283902 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.283902 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:30:49 crc kubenswrapper[4808]: I0217 17:30:49.146843 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"
Feb 17 17:30:49 crc kubenswrapper[4808]: E0217 17:30:49.147879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:30:50 crc kubenswrapper[4808]: E0217 17:30:50.148993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:30:52 crc kubenswrapper[4808]: I0217 17:30:52.031265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:52 crc kubenswrapper[4808]: I0217 17:30:52.093909 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"]
Feb 17 17:30:52 crc kubenswrapper[4808]: I0217 17:30:52.684301 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hf4ww" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" containerID="cri-o://bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" gracePeriod=2
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.240921 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww"
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.283365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") "
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.283455 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") "
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.283487 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") "
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.284280 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities" (OuterVolumeSpecName: "utilities") pod "c342da3e-2aeb-4794-b93b-816f13e8dbf0" (UID: "c342da3e-2aeb-4794-b93b-816f13e8dbf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.308705 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz" (OuterVolumeSpecName: "kube-api-access-jjlzz") pod "c342da3e-2aeb-4794-b93b-816f13e8dbf0" (UID: "c342da3e-2aeb-4794-b93b-816f13e8dbf0"). InnerVolumeSpecName "kube-api-access-jjlzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.332946 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c342da3e-2aeb-4794-b93b-816f13e8dbf0" (UID: "c342da3e-2aeb-4794-b93b-816f13e8dbf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.385408 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") on node \"crc\" DevicePath \"\""
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.385442 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.385451 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696159 4808 generic.go:334] "Generic (PLEG): container finished" podID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" exitCode=0
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696197 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56"}
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696221 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"d02687da12e1bb2927925182c84d9031a7ea83d264434f7638701e7bfa4e0094"}
Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696238 4808 scope.go:117] "RemoveContainer" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56"
Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.721721 4808 scope.go:117] "RemoveContainer" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.740014 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.749745 4808 scope.go:117] "RemoveContainer" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.755819 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.831108 4808 scope.go:117] "RemoveContainer" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" Feb 17 17:30:53 crc kubenswrapper[4808]: E0217 17:30:53.832217 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56\": container with ID starting with bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56 not found: ID does not exist" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832274 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56"} err="failed to get container status \"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56\": rpc error: code = NotFound desc = could not find container \"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56\": container with ID starting with bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56 not found: ID does not exist" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832301 4808 scope.go:117] "RemoveContainer" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" Feb 17 17:30:53 crc kubenswrapper[4808]: E0217 17:30:53.832642 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3\": container with ID starting with 7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3 not found: ID does not exist" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832677 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3"} err="failed to get container status \"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3\": rpc error: code = NotFound desc = could not find container \"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3\": container with ID starting with 7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3 not found: ID does not exist" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832707 4808 scope.go:117] "RemoveContainer" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" Feb 17 17:30:53 crc kubenswrapper[4808]: E0217 17:30:53.832934 4808 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5\": container with ID starting with 87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5 not found: ID does not exist" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832961 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5"} err="failed to get container status \"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5\": rpc error: code = NotFound desc = could not find container \"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5\": container with ID starting with 87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5 not found: ID does not exist" Feb 17 17:30:55 crc kubenswrapper[4808]: E0217 17:30:55.147613 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:55 crc kubenswrapper[4808]: I0217 17:30:55.159346 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" path="/var/lib/kubelet/pods/c342da3e-2aeb-4794-b93b-816f13e8dbf0/volumes" Feb 17 17:31:02 crc kubenswrapper[4808]: I0217 17:31:02.148873 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:02 crc kubenswrapper[4808]: E0217 17:31:02.149922 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:04 crc kubenswrapper[4808]: E0217 17:31:04.148744 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:06 crc kubenswrapper[4808]: E0217 17:31:06.148520 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:15 crc kubenswrapper[4808]: I0217 17:31:15.145512 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:15 crc kubenswrapper[4808]: E0217 17:31:15.148058 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.305538 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.305869 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.306047 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.307624 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:20 crc kubenswrapper[4808]: E0217 17:31:20.147998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:28 crc kubenswrapper[4808]: I0217 17:31:28.145816 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:28 crc kubenswrapper[4808]: E0217 17:31:28.146660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:28 crc kubenswrapper[4808]: E0217 17:31:28.149237 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:32 crc kubenswrapper[4808]: E0217 17:31:32.147813 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:40 crc kubenswrapper[4808]: E0217 17:31:40.148759 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.900811 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:31:42 crc kubenswrapper[4808]: E0217 17:31:42.901524 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-utilities" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901535 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-utilities" Feb 17 17:31:42 crc kubenswrapper[4808]: E0217 17:31:42.901558 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-content" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901563 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-content" Feb 17 17:31:42 crc kubenswrapper[4808]: E0217 17:31:42.901592 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901599 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901796 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.903291 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.913348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.051141 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.051306 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.051580 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.147050 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:43 crc kubenswrapper[4808]: E0217 17:31:43.147336 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.169798 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.171656 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.171864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.172506 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.180253 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.196712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.229489 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.740327 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:31:44 crc kubenswrapper[4808]: I0217 17:31:44.435391 4808 generic.go:334] "Generic (PLEG): container finished" podID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27" exitCode=0 Feb 17 17:31:44 crc kubenswrapper[4808]: I0217 17:31:44.435685 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27"} Feb 17 17:31:44 crc kubenswrapper[4808]: I0217 17:31:44.435711 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerStarted","Data":"503dec0c1b46beb199cdb9b9f8511fa3815bdff8b0c8a0335a7eeadf654cc2ed"} Feb 17 17:31:45 crc kubenswrapper[4808]: E0217 17:31:45.147014 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:45 crc kubenswrapper[4808]: I0217 17:31:45.447415 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerStarted","Data":"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"} Feb 17 17:31:52 crc kubenswrapper[4808]: E0217 17:31:52.149289 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:52 crc kubenswrapper[4808]: I0217 17:31:52.552531 4808 generic.go:334] "Generic (PLEG): container finished" podID="dc288d34-4657-4146-9213-4b9ddfb8269e" 
containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3" exitCode=0 Feb 17 17:31:52 crc kubenswrapper[4808]: I0217 17:31:52.552743 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"} Feb 17 17:31:54 crc kubenswrapper[4808]: I0217 17:31:54.574651 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerStarted","Data":"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"} Feb 17 17:31:54 crc kubenswrapper[4808]: I0217 17:31:54.604453 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nchh" podStartSLOduration=3.668280819 podStartE2EDuration="12.604431125s" podCreationTimestamp="2026-02-17 17:31:42 +0000 UTC" firstStartedPulling="2026-02-17 17:31:44.439802956 +0000 UTC m=+5867.956162029" lastFinishedPulling="2026-02-17 17:31:53.375953252 +0000 UTC m=+5876.892312335" observedRunningTime="2026-02-17 17:31:54.596190882 +0000 UTC m=+5878.112549985" watchObservedRunningTime="2026-02-17 17:31:54.604431125 +0000 UTC m=+5878.120790238" Feb 17 17:31:56 crc kubenswrapper[4808]: E0217 17:31:56.151906 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:58 crc kubenswrapper[4808]: I0217 17:31:58.147432 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:58 crc kubenswrapper[4808]: E0217 17:31:58.148165 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:32:03 crc kubenswrapper[4808]: E0217 17:32:03.148341 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.230755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.230960 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.303590 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.744024 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.816121 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:32:05 crc kubenswrapper[4808]: I0217 17:32:05.724292 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nchh" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server" containerID="cri-o://6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788" gracePeriod=2 Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.262887 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.314861 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"dc288d34-4657-4146-9213-4b9ddfb8269e\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.315063 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"dc288d34-4657-4146-9213-4b9ddfb8269e\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.315161 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"dc288d34-4657-4146-9213-4b9ddfb8269e\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.316113 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities" (OuterVolumeSpecName: "utilities") pod "dc288d34-4657-4146-9213-4b9ddfb8269e" (UID: "dc288d34-4657-4146-9213-4b9ddfb8269e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.320859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm" (OuterVolumeSpecName: "kube-api-access-pnljm") pod "dc288d34-4657-4146-9213-4b9ddfb8269e" (UID: "dc288d34-4657-4146-9213-4b9ddfb8269e"). InnerVolumeSpecName "kube-api-access-pnljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.417282 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.417316 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") on node \"crc\" DevicePath \"\"" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.456859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc288d34-4657-4146-9213-4b9ddfb8269e" (UID: "dc288d34-4657-4146-9213-4b9ddfb8269e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.519084 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739473 4808 generic.go:334] "Generic (PLEG): container finished" podID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788" exitCode=0 Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"} Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739549 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"503dec0c1b46beb199cdb9b9f8511fa3815bdff8b0c8a0335a7eeadf654cc2ed"} Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739659 4808 scope.go:117] "RemoveContainer" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739805 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.773242 4808 scope.go:117] "RemoveContainer" containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.797062 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.805528 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.822202 4808 scope.go:117] "RemoveContainer" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.880630 4808 scope.go:117] "RemoveContainer" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788" Feb 17 17:32:06 crc kubenswrapper[4808]: E0217 17:32:06.881367 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788\": container with ID starting with 6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788 not found: ID does not exist" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.881428 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"} err="failed to get container status \"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788\": rpc error: code = NotFound desc = could not find container \"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788\": container with ID starting with 6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788 not found: ID does not exist" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.881471 4808 scope.go:117] "RemoveContainer" containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3" Feb 17 17:32:06 crc kubenswrapper[4808]: E0217 17:32:06.883158 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3\": container with ID starting with adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3 not found: ID does not exist" containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.883232 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"} err="failed to get container status \"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3\": rpc error: code = NotFound desc = could not find container \"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3\": container with ID starting with adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3 not found: ID does not exist" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.883275 4808 scope.go:117] "RemoveContainer" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27" Feb 17 17:32:06 crc kubenswrapper[4808]: E0217 17:32:06.884128 4808 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27\": container with ID starting with 6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27 not found: ID does not exist" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27" Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.884228 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27"} err="failed to get container status \"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27\": rpc error: code = NotFound desc = could not find container \"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27\": container with ID starting with 6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27 not found: ID does not exist" Feb 17 17:32:07 crc kubenswrapper[4808]: E0217 17:32:07.160821 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:32:07 crc kubenswrapper[4808]: I0217 17:32:07.165444 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" path="/var/lib/kubelet/pods/dc288d34-4657-4146-9213-4b9ddfb8269e/volumes" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.370667 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"] Feb 17 17:32:09 crc kubenswrapper[4808]: E0217 17:32:09.371651 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-content" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.371673 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-content" Feb 17 17:32:09 crc kubenswrapper[4808]: E0217 17:32:09.371696 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.371711 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server" Feb 17 17:32:09 crc kubenswrapper[4808]: E0217 17:32:09.371753 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-utilities" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.371768 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-utilities" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.372146 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.375201 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.384975 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"] Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.398111 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.398159 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.398301 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.499930 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.499975 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.500132 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.500704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.500736 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.525111 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.706596 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.197337 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"] Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.797927 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6" exitCode=0 Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.798239 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6"} Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.798279 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerStarted","Data":"ea69f1e61af0c69960e28784a7e10b53b9c27388edf075d8aa066d5335b479b7"} Feb 17 17:32:11 crc kubenswrapper[4808]: I0217 17:32:11.146658 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:32:11 crc kubenswrapper[4808]: E0217 17:32:11.147017 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:32:11 crc kubenswrapper[4808]: I0217 17:32:11.810151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerStarted","Data":"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"} Feb 17 17:32:12 crc kubenswrapper[4808]: I0217 17:32:12.826128 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628" exitCode=0 Feb 17 17:32:12 crc kubenswrapper[4808]: I0217 17:32:12.826193 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"} Feb 17 17:32:13 crc kubenswrapper[4808]: I0217 17:32:13.846244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerStarted","Data":"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"} Feb 17 17:32:13 crc kubenswrapper[4808]: I0217 17:32:13.896434 4808 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-qkmhg" podStartSLOduration=2.449945129 podStartE2EDuration="4.896407444s" podCreationTimestamp="2026-02-17 17:32:09 +0000 UTC" firstStartedPulling="2026-02-17 17:32:10.80081569 +0000 UTC m=+5894.317174783" lastFinishedPulling="2026-02-17 17:32:13.247277985 +0000 UTC m=+5896.763637098" observedRunningTime="2026-02-17 17:32:13.882759512 +0000 UTC m=+5897.399118685" watchObservedRunningTime="2026-02-17 17:32:13.896407444 +0000 UTC m=+5897.412766547" Feb 17 17:32:18 crc kubenswrapper[4808]: E0217 17:32:18.149553 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.706704 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.708031 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.773242 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.960539 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:20 crc kubenswrapper[4808]: I0217 17:32:20.028352 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"] Feb 17 17:32:21 crc kubenswrapper[4808]: I0217 17:32:21.937630 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qkmhg" podUID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerName="registry-server" containerID="cri-o://1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9" gracePeriod=2 Feb 17 17:32:22 crc kubenswrapper[4808]: E0217 17:32:22.152039 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.516218 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.599924 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"2f28f98e-2752-4bf6-8867-d29f769d6d34\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.599985 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"2f28f98e-2752-4bf6-8867-d29f769d6d34\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.600126 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"2f28f98e-2752-4bf6-8867-d29f769d6d34\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.600974 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities" (OuterVolumeSpecName: "utilities") pod "2f28f98e-2752-4bf6-8867-d29f769d6d34" (UID: "2f28f98e-2752-4bf6-8867-d29f769d6d34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.601456 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.606331 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv" (OuterVolumeSpecName: "kube-api-access-kq9qv") pod "2f28f98e-2752-4bf6-8867-d29f769d6d34" (UID: "2f28f98e-2752-4bf6-8867-d29f769d6d34"). InnerVolumeSpecName "kube-api-access-kq9qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.635973 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f28f98e-2752-4bf6-8867-d29f769d6d34" (UID: "2f28f98e-2752-4bf6-8867-d29f769d6d34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.704248 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.704287 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") on node \"crc\" DevicePath \"\"" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954023 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9" exitCode=0 Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"} Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954132 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"ea69f1e61af0c69960e28784a7e10b53b9c27388edf075d8aa066d5335b479b7"} Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954161 4808 scope.go:117] "RemoveContainer" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954366 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg" Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.987239 4808 scope.go:117] "RemoveContainer" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.020098 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"] Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.036859 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"] Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.057028 4808 scope.go:117] "RemoveContainer" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.086289 4808 scope.go:117] "RemoveContainer" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9" Feb 17 17:32:23 crc kubenswrapper[4808]: E0217 17:32:23.086935 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9\": container with ID starting with 1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9 not found: ID does not exist" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087020 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"} err="failed to get container status \"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9\": rpc error: code = NotFound desc = could not find container \"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9\": container with ID starting with 1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9 not found: ID does not exist" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087061 4808 scope.go:117] "RemoveContainer" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628" Feb 17 17:32:23 crc kubenswrapper[4808]: E0217 17:32:23.087601 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628\": container with ID starting with 94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628 not found: ID does not exist" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087643 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"} err="failed to get container status \"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628\": rpc error: code = NotFound desc = could not find container \"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628\": container with ID starting with 94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628 not found: ID does not exist" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087671 4808 scope.go:117] "RemoveContainer" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6" Feb 17 17:32:23 crc kubenswrapper[4808]: E0217 17:32:23.088236 4808 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6\": container with ID starting with 20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6 not found: ID does not exist" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.088331 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6"} err="failed to get container status \"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6\": rpc error: code = NotFound desc = could not find container \"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6\": container with ID starting with 20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6 not found: ID does not exist" Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.162300 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f28f98e-2752-4bf6-8867-d29f769d6d34" path="/var/lib/kubelet/pods/2f28f98e-2752-4bf6-8867-d29f769d6d34/volumes" Feb 17 17:32:24 crc kubenswrapper[4808]: I0217 17:32:24.146518 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:32:24 crc kubenswrapper[4808]: E0217 17:32:24.147121 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:32:30 crc kubenswrapper[4808]: E0217 17:32:30.150075 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:32:34 crc kubenswrapper[4808]: E0217 17:32:34.149208 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"